Tech companies will not save our kids: ICCL speaks at Oireachtas Children’s Committee


14 February 2024

ICCL opening statement on recommender systems at the Children's Committee of the Irish Parliament and Senate

Dr Johnny Ryan and Dr Kris Shrishak appeared before the Committee on Children to speak about digital platforms' A.I. recommender systems.

See video of Dr Ryan's opening remarks here. 



Opening statement by Dr Johnny Ryan FRHistS, Director of ICCL's Enforce unit, at the Joint Committee on Children, Equality, Disability, Integration and Youth, 13 February 2024

Thank you Cathaoirleach.

Artificial intelligence is not a future technology.

TikTok, YouTube, Snapchat, and Instagram use it to shape the world that our children see through their platforms every day.

Their A.I. builds a tailored diet of content and pushes it into each child’s feed. This A.I. is known as a “recommender system”.

Recommender systems feed each person a personalised diet of content estimated to provoke or outrage that person. This is bad news for society, but good news for the tech companies. It keeps the person on the platform longer, increasing advertising opportunities.

Here are six examples of how this A.I. artificially distorts the world for our children.

  • The U.N. said Meta played a “determining role” in Myanmar's 2017 genocide.[1]

  • This month, lawyers for Rohingya refugees put the blame on Facebook’s recommender system, which they said “magnified hate speech through its algorithm”.[2]
  • Nearly three quarters of the problematic[3] YouTube content seen by more than thirty-seven thousand test volunteers was shown to them because YouTube’s own recommender system pushed it at them.[4]
  • An investigation by the Anti-Defamation League late last year showed that Facebook, Instagram, and X were filling 14-year-old test users’ feeds with hate and conspiracy theories.[5]
  • An investigation by the Institute for Strategic Dialogue found that YouTube’s recommender system routinely pushes extremely misogynistic, hateful material – hatred of women – into boys’ feeds.[6]
  • Uplift shared a story from a member about recommender systems. I will read two lines: “My beautiful, intelligent, accomplished niece was encouraged, incited to see suicide as a romantic way to end her life. She did end it.” 
  • Entirely separately, an investigation by Amnesty International revealed how this happens.
    Just one hour after their researchers started a TikTok account posing as a 13-year-old girl who views mental health content, TikTok’s A.I. started to show the child videos glamourising suicide.[7]

Meta. YouTube. Instagram. X. TikTok. Their A.I. “recommender systems” manipulate and addict our kids, and promote childhood hurt, hate, self-loathing, and suicide.

So what can be done? 

The first step is to acknowledge – at long last – that we cannot put our faith in voluntary action by tech companies.

Technology corporations have a very poor record of self-improvement and responsible behaviour even when they know their technology is harmful.[8] Even when lives are at stake – as they were in their thousands in Myanmar.[9]

The lesson is: tech corporations will not save our children.

We must stare this problem in the face.

We have to take up the tools to fix it.

Coimisiún na Meán’s forthcoming binding Code for video platforms[10] is anticipated to introduce an important rule: that recommender systems based on profiling must be off by default until a person makes the decision to switch them on.[11]

We and more than sixty organisations across Ireland have written to urge Coimisiún na Meán to introduce this rule and to go further: we are telling them that they have to make that rule inescapably binding so that big tech can’t wriggle out of it.[12]

Eighty-two percent of the Irish public supports a binding rule that profile-based recommender systems must be off by default. That’s according to polling by Ireland Thinks just last month.[13]

This overwhelming support for a binding rule switching these recommender systems off by default crosses the divisions of age, education, and income.

And there is overwhelming international support for this too. Coimisiún na Meán is leading the world.

In Brussels, a cross-party group of MEPs formally wrote to the European Commission in December to urge that Coimisiún na Meán’s proposed rule be applied across the whole EU.[14]

United States Federal Trade Commissioner Alvaro Bedoya himself took to Twitter recently to praise Coimisiún na Meán’s proposed rule as a model for the White House to follow.[15]

So it’s clear: we all want binding rules that switch A.I. recommender systems off by default.

But it remains to be seen whether Coimisiún na Meán will in fact introduce this rule in a strictly binding form in its final Code. This will be strongly opposed by the big tech corporations who put our children in harm’s way. Coimisiún na Meán will have to be resolute.

We at ICCL urge Committee Members to press Coimisiún na Meán to ensure that recommender systems are off by default, and make this a strict, binding rule.

We have the tools to address this crisis. We need to pick them up and confront the problem. Ireland can and Ireland should lead the world.

Notes:

[1] U.N. investigators found that Meta played a “determining role” in Myanmar’s 2017 genocide. Amnesty International reported Meta’s algorithms were key contributors. See “U.N. investigators cite Facebook role in Myanmar crisis”, Reuters, 12 March 2018 (URL: https://www.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2PN) and “The social atrocity: Meta and the right to remedy for the Rohingya”, Amnesty International, 2022 (URL: https://www.amnesty.org/en/documents/ASA16/5933/2022/en/), pp. 45-48, p. 71.

[2] "Rohingya Refugees File Petition Against Facebook in Indian Court", Voice of America, 3 February 2024 (URL: https://www.voanews.com/a/rohingya-refugees-file-petition-against-facebook-in-indian-court-/7470093.html).

[3] "YouTube Regrets: A crowdsourced investigation into YouTube's recommendation algorithm", Mozilla, July 2021 (URL: https://assets.mofoprod.net/network/documents/Mozilla_YouTube_Regrets_Report.pdf), pp 9-13.

[4] ibid. p. 17.

[5] "From Bad To Worse: Amplification and Auto-Generation of Hate", ADL, 16 August 2023 (URL: https://www.adl.org/resources/report/bad-worse-amplification-and-auto-generation-hate )

[6] "Algorithms as a weapon against women", Institute for Strategic Dialogue, April 2022 (URL: https://www.isdglobal.org/wp-content/uploads/2022/04/Algorithms-as-a-weapon-against-women-ISD-RESET.pdf).

[7] "Driven into the darkness", Amnesty International, 7 November 2023 (URL: https://www.amnesty.org/en/latest/news/2023/11/tiktok-risks-pushing-children-towards-harmful-content/).

[8] Despite internal concern about amplifying hazardous content, from 2017 to 2020 Meta strongly amplified posts that received “emoji” reactions from other people. It persisted in doing so until late 2020, even after internal research in 2019 confirmed that content receiving “angry” emoji reactions was more likely to be misinformation. See "Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation", Washington Post, 26 October 2021 (URL: https://www.washingtonpost.com/technology/2021/10/26/facebook-angry-emoji-algorithm/).

[9] See previous reference to Myanmar genocide.

[10] The Code will cover Facebook, Instagram, YouTube, Udemy, TikTok, LinkedIn, X, Pinterest, Tumblr, and Reddit.

[11] “…that recommender algorithms based on profiling are turned off by default; 
…that algorithms that engage explicitly or implicitly with special category data such as political views, sexuality, religion, ethnicity or health should have these aspects turned off by default;”
Section 1.3 of Appendix 3 of the Draft Code.

[12] "More than 60 organisations urge strong action by Coimisiún na Meán on “recommender system” algorithms", ICCL, 31 January 2024 (URL: https://www.iccl.ie/news/62-organisations-urge-strong-action-by-coimisiun-na-mean-on-recommender-system-algorithms/).

[13] “82% of the Irish public wants Big Tech’s toxic algorithms switched off”, ICCL, 22 January 2024 (URL: https://www.iccl.ie/news/82-of-the-irish-public-wants-big-techs-toxic-algorithms-switched-off/).
Question: “Would you be in favour of social media companies being forced to stop building up specific data about you (your sexual desires, political and religious views, health conditions and or ethnicity) and using that data to pick what videos are shown to you (unless you have asked them to do this)?”.
Yes: 82%
No: 12%
Not sure: 6%

[14] “Big Tech’s divisive ‘personalization’ attracts fresh call for profiling-based content feeds to be off by default in EU”, TechCrunch, 20 December 2023 (URL: https://techcrunch.com/2023/12/20/dsa-recommender-systems/).

[15] https://x.com/BedoyaFTC/status/1744450499791695938?s=20 and https://x.com/BedoyaFTC/status/1749853979108913441?s=20.