ICCL recommendations in European Commission’s AI standardisation request

12 June 2023

The European Commission sent a request to European standard-setting bodies on 22 May 2023[1] for technical standards covering the requirements for high-risk artificial intelligence (AI) systems in the AI Act. These standards will be as important as the AI Act itself: companies that satisfy them will be treated as having fulfilled the corresponding requirements of the Act, which makes it essential to get the standards right.

In early 2022, ICCL pointed out technical errors in the AI Act, one of which was that the proposed regulation incorrectly relied on accuracy metrics. Accuracy is often misleading and is only one amongst many performance metrics. The importance of using the correct performance metric cannot be overstated. In applications such as cancer diagnosis, using the wrong metric could be dangerous: an AI system with high “accuracy” may give the illusion of strong performance even if it has a high false negative rate, which could be fatal.
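
To see why, consider a purely hypothetical screening scenario (the figures below are invented for illustration and do not describe any real system): if only 5% of screened patients have cancer, a system that labels every patient as healthy scores 95% on “accuracy” while missing every single case. The short Python sketch below works through the arithmetic.

```python
# Hypothetical cancer-screening example; all figures are invented for illustration.
# Suppose 1,000 patients are screened and 50 of them (5%) actually have cancer.
# A naive system labels every patient as healthy.

total_patients = 1000
patients_with_cancer = 50
patients_without_cancer = total_patients - patients_with_cancer

# Confusion-matrix counts for the "always healthy" system.
true_positives = 0                          # cancer cases correctly flagged
false_negatives = patients_with_cancer      # cancer cases missed
true_negatives = patients_without_cancer    # healthy patients correctly cleared
false_positives = 0                         # healthy patients wrongly flagged

accuracy = (true_positives + true_negatives) / total_patients
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(f"Accuracy: {accuracy:.0%}")                        # 95% -- looks excellent
print(f"False negative rate: {false_negative_rate:.0%}")  # 100% -- every cancer case missed
```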

The European Commission has addressed this issue in the standardisation request:

"“accuracy” shall be understood as referring to the capability of the AI system to perform the task for which it has been designed. This should not be confused with the narrower definition of statistical accuracy, which is one of several possible metrics for evaluating the performance of AI systems."[2]

Although the term “accuracy” is still misleading, the Commission’s clarification that it uses the term to refer to “the capability of the AI system to perform the task for which it has been designed” is a step in the right direction.

Civil society organisations, including ICCL, have also questioned the Commission’s approach of using technical standards for socio-technical systems such as AI. In particular, standard-setting bodies may have neither the expertise nor the incentive to serve the public interest and address the fundamental rights implications of these systems.

On this issue, the Commission says:

“The public interest should take a prominent role when executing this standardisation request, given its importance for the development and the deployment of AI.”[3]

This is important, but it is unclear how the Commission will check that the public interest is upheld when some participants in European standard-setting bodies have business models that may not serve that interest.

Notes

[1] Commission implementing decision of 22.5.2023 on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence.

[2] Annexes to the Commission implementing decision of 22.5.2023 on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence, Annex II, Section 2.6 Accuracy specifications for AI systems.

[3] Commission implementing decision of 22.5.2023 on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence, paragraph 14.