23 February 2024

AI Act: what does the EU Regulation on Artificial Intelligence say?

The text of the AI Act was finally approved on 21 May by the Council of the European Union, making it the first law in the world to regulate the development and trade of Artificial Intelligence. After gaining approval from the Committee of Permanent Representatives (Coreper), then from the European Parliament’s IMCO and LIBE committees, and finally from the Parliament itself, the AI Act represents a key step for technological innovation in the coming years.

What is the AI Act?

In 2021, the European Commission proposed for the first time a regulatory framework to govern the development and use of technologies leveraging Artificial Intelligence. The goal was to ensure that all AI systems used within the European Union were safe, transparent, traceable, non-discriminatory, and environmentally friendly. The AI Act represents the conclusion of an approximately three-year journey to define a clear regulation that would bring all EU member states into agreement, addressing doubts and concerns along the way. The agreement was reached in December 2023.

What does the AI Act entail?

The regulation will impose obligations on providers and users, based on a model with four levels of risk: unacceptable, high, limited, and minimal.

The regulation specifically prohibits various uses of AI, including those that exploit individuals or vulnerable groups, or that perform biometric categorization based on sensitive data such as religious beliefs, political orientation, or sexual orientation. It also prohibits systems such as workplace emotion recognition, social scoring, and predictive policing. However, the text allows for exceptions, such as the use of biometric categorization for police purposes with legally obtained data.

The AI Act also addresses the use of real-time facial and biometric recognition, prohibiting it except in three specific situations: searching for victims of crimes or missing persons; concrete threats to life or physical safety; and locating alleged perpetrators of specific crimes such as terrorism, child abuse, kidnapping, illegal arms trafficking, murder, rape, trafficking in radioactive material, organ trafficking, environmental crimes, robbery, sabotage, and participation in criminal organizations. In these cases, use is allowed only after approval based on an assessment of the impact on citizens’ fundamental rights, and only to confirm the identity of an individual already under investigation.

Particular attention is given to high-risk systems, including biometric identification, security of critical infrastructure, algorithms used in work and school contexts, and law enforcement. Systems used for migration and border control, legal assistance and interpretation, and those used by public or private entities for the allocation of subsidies, refunds, and funding, or risk assessment for insurance are also considered high-risk. Developers of these systems must establish controls, ensure data transparency, and maintain logs throughout the commercial life of the algorithm. Developers must provide technical documents, security information, and undergo monitoring.

High-impact systems must adhere to stricter obligations, undergoing in-depth assessments and immediately reporting any serious incidents to the European Commission. The computing-power threshold used to classify a system as “high-impact” is 10^25 FLOPs (the total number of floating-point operations used to train the model, not operations per second).

General-purpose AI systems, especially generative AI, are also regulated, with the obligation to label AI-generated content in a recognizable manner and to ensure that it is not illegal. Developers must also provide summaries of the copyrighted material used for training.

How will the checks be conducted, and what does the regulation entail for violators?

A portion of the checks on the use and development of artificial intelligence systems will be delegated to national authorities, which must establish a regulatory sandbox within two years of the AI Act coming into force to safely test such systems. Additionally, the regulation establishes an AI Board within the European Commission, consisting of one representative from each EU member state, and may be supported by forums, technical consultants, and an independent panel of scientists and experts.

Finally, sanctions are specified for those who do not comply with the regulation: fines of up to 35 million euros or 7% of global annual turnover for prohibited uses, and up to 15 million euros or 3% of global annual turnover for non-compliance with the rules for high-risk or general-purpose systems.
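To make the two-part penalty structure concrete, here is a minimal sketch in Python. It assumes (as the regulation provides for companies) that the applicable ceiling is the higher of the fixed amount and the turnover percentage; the function name and parameters are illustrative, not taken from the regulation itself.

```python
def fine_ceiling(fixed_cap_eur: int, turnover_pct: float, global_turnover_eur: int) -> float:
    """Upper bound of an administrative fine: the greater of a fixed cap
    and a percentage of worldwide annual turnover (assumed 'whichever is
    higher' rule for undertakings)."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-use tier: up to 35 million EUR or 7% of global turnover.
# For a company with 1 billion EUR turnover, the percentage dominates:
print(fine_ceiling(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# For a smaller company (100 million EUR turnover), the fixed cap applies:
print(fine_ceiling(35_000_000, 0.07, 100_000_000))  # 35000000.0
```

The same function covers the lower tier (15 million euros or 3%) simply by changing the arguments.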
