EU's AI liability regime should apply strict liability

14 December 2022

Last month, ICCL responded to the European Commission's consultation on artificial intelligence liability.

On 28 November 2022, ICCL made two recommendations to better protect people harmed by AI systems:
1. hold companies liable regardless of intent or fault; and
2. provide clarity on how immaterial harms and societal harms will be addressed. 

See ICCL's related work on artificial intelligence.


28 November 2022

Submission to the consultation on AI Liability

Dear colleagues,

  1. I write on behalf of the Irish Council for Civil Liberties (ICCL), Ireland’s oldest independent human rights monitoring organisation. We welcome the opportunity to respond to the consultation on the Artificial Intelligence Liability Directive (AILD).[1]

  2. The AILD attempts to address the problems that artificial intelligence (AI) systems pose for liability rules in the European Union (EU). It aims to reduce the difficulty for “victims to identify the liable person and prove the requirements for a successful liability claim”.[2]

  3. Despite this, the burden on victims remains too high. We have two recommendations to better protect victims:
    1. Apply a strict liability regime for all AI systems across the Union.
    2. Adequately address immaterial damages and societal harms.

We explain our reasons for these recommendations.

Apply a strict liability regime for all AI systems across the Union

  1. The AILD relies on a fault-based liability regime, where the victim must prove that the harm from the AI system was caused by the fault of the defendant. In contrast, in a strict liability regime, the victim does not have to prove fault: they only have to show that the AI system harmed them, and the defendant is held liable regardless of intent or fault. The European Parliament, in its own-initiative resolution, had already called for a strict liability regime for certain AI systems.[3] The Commission dismissed this in its explanatory memorandum.[4] This is a mistake.

  2. First, victims will often not know if the damage they suffered involved an AI system. Even if they do, AI systems are complex and difficult to understand. This Directive provides no mechanism to correct this. The EU AI database[5] set up under the AI Act is of no help here: it is limited to high-risk AI systems and it does not register users[6] of AI systems.

  3. Both users and providers[7] of non-high-risk AI systems are hard to identify because the EU AI database is limited to high-risk AI systems.

  4. The victim has no opportunity to make “proportionate attempts”[8] to gather evidence because it is disproportionately difficult for them to even identify which providers and users of AI systems have caused harm.

  5. Second, the information asymmetry between the defendant and the victim is immense. Even when victims have identified the relevant defendant and evidence has been disclosed, they will have difficulty interpreting the technical evidence. Should an average person be expected to assess whether an AI system was designed with an “appropriate level of accuracy, robustness and cybersecurity”?[9] That assessment requires technical expertise, which is readily available to industry but not to victims. The Directive should be amended to remove this barrier to victims’ claims.

  6. Third, if, despite these hurdles, the victim somehow manages to make a liability claim, the defendant who has developed and/or deployed the AI system could provide an alternative interpretation of the evidence to rebut the presumption of a causal link.[10] Thus, a “rebuttable presumption”[11] is insufficient to protect victims when the information asymmetry is significant.

  7. Fourth, the operators of non-high-risk AI systems may not even have created basic documentation for their AI systems. They are not required to fulfil the requirements in Title III, Chapter 2 of the AI Act, which apply only to high-risk AI systems. Without access to such documentation, it will be nearly impossible for victims to prove a causal link between the harm and the defendant. For such AI systems, it is even more necessary to apply strict liability.

  8. Fifth, we are concerned that the provision for “trade secret or alleged trade secret”[12] will allow defendants to refuse disclosure of relevant documents to victims. We believe that a strict liability regime will benefit victims while not requiring defendants to share trade secrets.

Adequately address immaterial damages and societal harms

  1. First, AI systems can cause immaterial damages, which include infringement of fundamental rights[13] such as, according to the Commission itself, “loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment”.[14]

  2. The AILD relies on the civil law of Member States to protect people from many fundamental rights infringements, including violations of personal dignity, the right to equality and non-discrimination, and respect for private and family life.[15] This is deeply concerning because “immaterial damage is generally non-recoverable” under civil law in many Member States, including Germany and Italy.[16]

  3. The AILD should provide for immaterial damages in Article 2 and harmonise the compensation for these damages across the Union. Article 4(6) of the Product Liability Directive[17] defines damages for that Directive; the AILD should take a similar approach to provide legal certainty.

  4. Second, the AILD lacks clarity on liability claims for societal harms, including environmental harms and the manipulation of people through the generation of fake content.[18] Recently, an AI system was released to the public with the claim that it “can store, combine and reason about scientific knowledge”.[19] It was soon withdrawn after it was observed to have “made up fake papers (sometimes attributing them to real authors)”.[20] Such AI systems could produce societal harms, including reduced trust in scientific information.

  5. Societal harms should be explicitly addressed in this Directive. Compensation for such damages should not rely on Member State liability rules, but should be uniform across the Union.[21]

  6. We at ICCL are at your disposal to discuss these matters further.

Sincerely,


Dr Kris Shrishak
Technology Fellow, ICCL

Notes

[1] Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), 28 September 2022. URL: https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807

[2] Ibid. p. 1.

[3] European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)). URL: https://www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html

[4] AI Liability Directive, p. 14. Article 5 (2) states that strict liability will be considered during the evaluation of AILD five years after the transposition period.

[5] Artificial Intelligence Act, Article 60.

[6] Ibid. Article 3(4).

[7] Ibid. Article 3(2).

[8] AI Liability Directive, Article 3 (2): “the national court shall only order the disclosure of the evidence … if the claimant has undertaken all proportionate attempts at gathering the relevant evidence from the defendant.”

[9] Ibid. Article 4 (2) (d).

[10] A. Bertolini, "Artificial Intelligence and Civil Liability", Study requested by the JURI committee. p. 84. URL: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf

[11] AI Liability Directive, Article 4.

[12] Ibid. Article 3 (4).

[13] Ibid. p. 9.

[14] European Commission, White Paper On Artificial Intelligence - A European approach to excellence and trust, COM(2020) 65 final. p. 10. URL: https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

[15] AI Liability Directive, p. 10.

[16] AI Liability Directive Impact Assessment. Commission Staff Working Document, Impact Assessment Report accompanying the document Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence, COM(2022) 469 final, 28 September 2022. pp. 164-165. URL: https://ec.europa.eu/info/sites/default/files/1_4_197608_impact_asse_dir_ai_en.pdf

[17] Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM(2022) 495 final, 28 September 2022. URL: https://single-market-economy.ec.europa.eu/document/3193da9a-cecb-44ad-9a9c-7b6b23220bcd_en

[18] Ibid. p. 157, and AI Liability Directive, Recital 12. Societal risks or harms are mentioned by the Commission in the context of liability exemptions harmonised under the Digital Services Act, which covers recommender systems.

[19] Taylor et al., "Galactica: A Large Language Model for Science". URL: https://galactica.org/static/paper.pdf

[20] Will Douglas Heaven, "Why Meta’s latest large language model survived only three days online", 18 November 2022. URL: https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/

[21] AI Liability Directive Impact Assessment, footnote 295. The Commission refers to Advocate General Øe's opinion in case C-682/18, Frank Peterson v. Google LLC, YouTube LLC, e.a., ECLI:EU:C:2020:586 to suggest that claims may be made when the damage is "induced by the functioning of the AI systems". However, this is only possible "where MS' liability rules do offer an avenue to compensation".