Countdown to compliance as EU AI Act set to enter into force

Written By

Paula Alexe

Regulatory and Public Affairs Advisor
Belgium

Francine Cunningham

Regulatory and Public Affairs Director
Belgium, Ireland

Paolo Sasdelli

Regulatory and Public Affairs Advisor
Belgium

One of the most significant pieces of legislation to be adopted by the outgoing EU mandate, the Artificial Intelligence Act (AI Act) was finally published in the Official Journal on 12 July 2024. It is due to come into force on 1 August 2024 and will become applicable in a phased manner, between six and 36 months thereafter, with most provisions applying after 24 months.

As the world’s most comprehensive legislative framework for AI developers, deployers and importers, the new Regulation seeks to guarantee that AI systems placed on the European Union internal market are secure, uphold existing laws on fundamental rights and adhere to EU values.

Key Obligations

Taking the form of a Regulation, the AI Act is directly applicable in the EU’s 27 Member States. The Act takes what the EU institutions have described as a “risk-based approach”:

  • Prohibited AI practices: include AI practices violating fundamental rights, such as social scoring, exploiting people's vulnerabilities, using subliminal techniques, real-time biometric identification in public spaces (with limited exceptions), certain forms of individual predictive policing, emotion recognition in workplaces and schools, and untargeted scraping of the internet or CCTV footage for facial images to build databases.
  • High-risk AI systems: include AI systems used in biometrics, critical infrastructure, education, employment, self-employment, essential private/public services, law enforcement, migration, asylum, border control, justice administration and democratic processes. AI systems which are safety components of products, or are themselves products, covered by EU product safety legislation are also considered high-risk. Requirements for these systems include pre-market conformity assessment, risk management, data governance, technical documentation, record keeping, transparency and human oversight. High-risk AI systems deployed by public authorities or related entities must be registered in a public EU database.
  • Transparency obligations for certain AI systems: providers of AI systems intended to interact directly with natural persons or which generate synthetic audio, image, video or text content will be subject to transparency obligations. Deployers of emotion recognition or biometric categorisation systems and deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake are also subject to their own transparency obligations.
  • General Purpose AI (GPAI) systems and models: risk categorisation is based on model capability rather than application. Two risk categories exist: GPAI models entailing systemic risk and all other GPAI models. Providers of systemic risk GPAI models have more compliance requirements. 
    • GPAI Models: providers must maintain technical documentation and share information with potential users about the capabilities and limitations of the model. They must also draw up and make publicly available a ‘sufficiently detailed summary’ about the content used for training of the model. A code of practice will be drawn up by the AI Office. 
    • GPAI Models with Systemic Risks: a GPAI model is regarded as presenting systemic risk if it has high-impact capabilities or is designated as such by the European Commission. A model is presumed to have high-impact capabilities if the cumulative compute used for its training is greater than 10^25 floating point operations (FLOPs). Providers of these models have additional obligations, including model evaluations, systemic risk mitigation, incident reporting and ensuring cybersecurity protection. A code of practice is also foreseen for these models.

Timeline and enforcement structure

Provisions regarding prohibitions will apply six months after the Regulation’s entry into force on 1 August 2024, while requirements for GPAI models will apply after 12 months. Most of the other provisions in the AI Act will apply 24 months after it enters into force, while some specific requirements for high-risk AI systems will apply after 36 months.

With regard to enforcement, the European Commission established an AI Office on 16 June 2024, located within the Commission’s Directorate-General for Communications Networks, Content and Technology (DG CNECT), under the leadership of Lucilla Sioli. The AI Office has exclusive authority over GPAI models and is responsible for developing the EU’s expertise and capabilities in the field of artificial intelligence.

The European AI Office is structured into five units and two advisory roles. These are:

  • Excellence in AI and Robotics Unit
  • Regulation and Compliance Unit
  • AI Safety Unit
  • AI Innovation and Policy Coordination Unit
  • AI for Societal Good Unit
  • Lead Scientific Advisor
  • Advisor for International Affairs

Overall, the AI Office is expected to employ over 140 staffers, including economists, technology specialists, administrative assistants, lawyers and policy specialists.

In addition to the AI Office, Member States will be required to appoint national competent authorities responsible for supervising the application and enforcement of the rules on high-risk AI systems and prohibited practices. An AI Board, comprising representatives of the Member States, will be established with the aim of ensuring coherent implementation of the Regulation. An Advisory Forum of stakeholders and a Scientific Panel of independent experts will also be established.

Potential penalties

Penalties for infringements of the new Regulation can reach up to EUR 35 million or 7% of annual global turnover (whichever is higher) for prohibited practices, and up to EUR 15 million or 3% for other violations. Supplying incorrect information can result in penalties of up to EUR 7.5 million or 1.5% of annual turnover. Additionally, providers can be required to withdraw non-compliant AI systems from the market.
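For illustration, the penalty caps combine a fixed amount with a percentage of worldwide annual turnover, with the higher of the two applying to undertakings. A minimal sketch of that calculation (the function name and turnover figures are our own hypothetical examples, not part of the Regulation):

```python
def penalty_cap(fixed_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for an undertaking:
    the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_eur, turnover_pct * annual_turnover_eur)

# Hypothetical company with EUR 2 billion global annual turnover
# engaging in a prohibited AI practice (EUR 35m / 7% tier):
cap = penalty_cap(35_000_000, 0.07, 2_000_000_000)
print(f"EUR {cap:,.0f}")  # EUR 140,000,000 (7% of turnover exceeds the fixed cap)
```

For a smaller company, the fixed amount may be the binding cap instead: with EUR 100 million turnover, 7% is only EUR 7 million, so the EUR 35 million ceiling applies.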

Next Steps

The European Commission is planning to come forward with around 20 follow-up documents by August 2026, including secondary legislation (Delegated Acts and Implementing Acts), guidelines and templates, in addition to codes of conduct, to support implementation of the AI Act. This is in addition to work on codes of practice to help demonstrate compliance for general-purpose AI models until harmonised standards are established. Subsequently, the Commission may grant these codes EU-wide validity through an Implementing Act. Looking ahead, the new European Parliament is also expected to focus on the relationship between AI and copyright and on the use of AI in the workplace.

How we can help you prepare

As a resource for your organisation, you can access Bird & Bird’s EU AI Act flyer here.

Furthermore, to guide you through the key aspects of the Act, our multi-disciplinary team is producing an AI Act Guide which highlights the most important actions organisations should take in seeking to comply with the new Regulation. If you’d like to be the first to receive the Guide in your inbox upon launch, please register your interest using our form here.

For further information, please contact Francine Cunningham, Paolo Sasdelli and Paula Alexe.