Navigating the EU AI Act - High Risk Systems

The EU Artificial Intelligence Act (AI Act) provides a legal framework for providers, deployers, and importers of AI systems. It officially came into force on 1 August 2024 and adopts a risk-based approach, meaning different obligations apply depending on the level of risk an AI system poses.

The EU’s approach to AI regulation ensures that higher-risk AI applications, particularly those that can significantly impact fundamental rights, are either prohibited or subject to stricter requirements and oversight. High-risk AI systems include those using biometrics and those used in critical infrastructure, education, employment and access to self-employment, essential private and public services, law enforcement, migration, asylum and border control, the administration of justice, and democratic processes. AI systems that are safety components of products, or that are themselves products, covered by EU product safety legislation are also classified as high-risk.

Although the AI Act is being implemented in stages, all provisions relating to high-risk AI systems will be enforceable by August 2027.

In the second webinar of our EU AI Act series, our experts provided a detailed exploration of high-risk AI systems, highlighting the key legal issues organisations need to be aware of when working with these systems.
