9 December 2024
ICCL Enforce responded to the European Commission's consultation on the application of the definition of AI systems and the prohibitions under the EU AI Act. In our submission we emphasise the need to prohibit two categories of harmful AI system:
- Behavioural advertising; and
- Recommender systems that exploit vulnerabilities of people.
Behavioural advertising is a manipulative technique that should be prohibited
Behavioural advertising systems are AI systems that expose intimate information about people, including their financial circumstances, mental state, and compromising secrets. Our investigation into these systems found codes designating people’s “mental health”, “substance abuse”, “depression”, “gambling”, religion, ideology, “psychographics”, “personal debt”, “bankruptcy”, and even whether they are victims of “incest & abuse”.
Systems already placed on the market have distorted individuals’ behaviour and significantly adversely affected them. For example:
- They manipulate voters. Facebook was a “psychological warfare tool” during the Brexit referendum, impairing voters’ ability to make informed decisions and causing severe harm.
- They manipulate and harm minorities. Grindr tracked a Catholic priest over 52 weeks, including his use of gay apps and his visits to bathhouses. A conservative group used the system to disgrace him in the press and forced him to leave the priesthood. He is suing Grindr for millions of dollars in damages.
- These systems (deployed by many companies, including MediaMath, Adobe, Google, Facebook, Microsoft…) are used by the gambling industry to lure gambling addicts with troubled finances back to gambling, causing adverse psychological and financial harm.
Recommender systems that exploit vulnerabilities of people should be prohibited
Recommender systems are AI systems that profile individuals’ behaviour to determine what content to show them to keep them engaged. Systems already placed on the market have distorted people’s behaviour, with significantly adverse impacts:
- TikTok’s internal studies conclude that its system “interferes with essential personal responsibilities like sufficient sleep, work/school responsibilities”. It has “negative mental effects … loss of analytical skills, memory formation, contextual thinking, … and increased anxiety”. TikTok executives said “the younger the user the better the performance”. Researchers found that it pushes self-harm and suicide content at children.[1]
- YouTube’s system pushes content promoting extreme hatred of women at boys.
- Facebook’s internal research says that its recommender systems distorted people’s behaviour such that “64% of all extremist group joins are due to our recommendation tools... Our recommendation systems grow the problem”.[2]
See ICCL’s related work on artificial intelligence.
Notes
[1] "Driven into the darkness", Amnesty International, 7 November 2023. See https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/. Reproduced by RTE Prime Time in “13 on TikTok: Self-harm and suicide content shown shocks experts”, 16 April 2024. See https://www.rte.ie/news/primetime/2024/0416/1443731-13-on-tiktok-self-harm-and-suicide-content-shown-shocks-experts.
[2] Meta internal document dated 2016. Released by Frances Haugen in “The Facebook Files”.