The final sprint for the EU’s AI Act

Written By

Shima Abbady

Associate
Netherlands

I am a senior associate in our Commercial group in The Hague where I specialise in advice and litigation relating to AI, Data and Technology. I am a member of our Artificial Intelligence, Data Protection and Tech & Comms groups. I am also a PhD Researcher in the field of AI regulation.

The trilogue negotiations for the AI Act are in their final stages, with the European Parliament, Council, and Commission aiming to agree on a final text. Key points of contention include the definition of the term 'AI-system', which determines the scope of the Act, and which categories of systems will qualify as high risk or be banned outright. The Parliament is proposing new high-risk categories, and the proposed penalties for non-compliance are substantial. The aim is for the AI Act to become the first comprehensive regulatory framework for AI and to set standards for the rest of the world.

The AI Act is currently in the last legislative phase: the trilogue negotiations between the European Commission, Council and Parliament. The Commission published its proposal for the AI Act in April 2021, and the Council reached its final compromise text last November. The European Parliament adopted its position on the AI Act in the afternoon of 14 June, and the trilogue negotiations commenced that same evening.

The trilogue negotiations aim to reach agreement on a final text, meaning that differences between the parties must be resolved through compromise. The definition of the term 'AI-system' has been a key point of contention. The Commission's proposal took a broad view of the term, which the Council then narrowed. The Parliament broadened the definition again to align it with the OECD definition. Given that the AI Act will be based on maximum harmonisation, the scope of this definition is a crucial element.

Another point of negotiation is which categories of systems will qualify as high risk and which will be banned. The Commission and Council largely agree on these topics. The Parliament, however, proposes new high-risk categories, such as recommender systems of very large online platforms and systems making inferences based on biometric data, including emotion recognition. The Parliament also wishes to significantly amend the list of prohibited practices, for example with a total ban on real-time remote biometric identification in public spaces.

How the EU should regulate foundation models will be another hot topic. The Commission's proposal did not account for them, and the Council addressed them only marginally by including 'general purpose AI-systems' in its compromise text. The Parliament, by contrast, determined its position in the wake of ChatGPT, proposing a stringent set of additional rules for providers of foundation models, including an obligation to make available a sufficiently detailed summary of the use of training data protected under copyright law.

The Parliament's proposal also places more emphasis on governance of the AI value chain. For example, unfair contractual terms imposed on SMEs or startups in the AI value chain would be prohibited, and the Commission would propose model clauses to guide contracting. This closer look at the AI value chain also led the Parliament to propose new requirements for deployers, such as obligations to implement appropriate technical and organisational measures and to carry out a fundamental rights impact assessment.

To ensure that companies comply with all these obligations, the Commission and Council propose maximum penalties of up to €30 million or 6% of the total worldwide turnover for the preceding financial year. Parliament wants to go even higher: up to €40 million or 7% of turnover.

The above are only a few examples of the differing positions the parties will need to hash out. However, as the timing of the start of the trilogue negotiations indicates, the parties are in a hurry. AI has become a major political topic in recent months, and the EU hopes to set standards for the rest of the world by being the first to enact a comprehensive regulatory framework for AI. The institutions may reach a political agreement on the final text before the end of this calendar year, after which it will take some months for the text to be published in the Official Journal of the European Union. The AI Act will enter into force on the 20th day following publication, followed by a grace period. The parties' proposals for this grace period range from 24 to 36 months for the substantive provisions of the AI Act, with the 24-month period currently appearing more likely to make it into the final text. As such, the AI Act may apply from the first half of 2026.

Details of the final text, and specifics such as standards, remain to be settled, but now that all parties have determined their positions, the AI Act is taking shape. Many companies are therefore starting to prepare. Any company developing or deploying an AI-system regulated by the AI Act will have obligations to comply with, which will, given their nature, likely require preparation well in advance. Such preparation may also entail more than compliance-related activities alone. For example, parties who wish to provide input for the negotiations still have opportunities to do so; consultation with EPP Shadow Rapporteurs is possible until 31 July.

For more information contact Shima Abbady.

