New EU AI Act guidelines: what are the implications for businesses?

Written By

Toby Bond

Partner
UK

I'm a partner in our Intellectual Property Group. Having studied physics at university, I'm fascinated by technology and the way in which it is reshaping our world.

Kate Deniston

Professional Support Lawyer
UK

As the knowledge lawyer in our Tech Transactions team, I play a key role in keeping both clients and colleagues at the forefront of tech-related legal and market developments.

This article first appeared in the March 2025 issue of PLC Magazine: http://uk.practicallaw.com/resources/uk-publications/plc-magazine

The EU AI Act (2024/1689/EU) entered into force on 1 August 2024 and, among other measures, prohibits certain AI practices, such as social scoring and predictive policing (Article 5). The European Commission (the Commission) is required to develop various guidelines on the practical implementation of the AI Act (Article 96).

Accordingly, following a consultation in late 2024, the Commission has now published drafts of the long-awaited guidelines on prohibited AI practices and on the definition of an “AI system”, on 4 and 6 February 2025 respectively. This was later than expected, given that Article 5 became applicable on 2 February 2025.

The guidelines are intended to promote the consistent application of the AI Act across the EU, but they are non-binding; an authoritative interpretation of the AI Act can ultimately be given only by the European Court of Justice. The guidelines have been approved by the Commission but have not yet been formally adopted, and they have potential implications for businesses established both inside and outside the EU.

Prohibited AI practices guidelines

The guidelines on prohibited AI practices begin by providing detailed guidance on the scope of Article 5 of the AI Act. In doing so, they provide useful clarifications on various aspects of Articles 2 (Scope) and 3 (Definitions). For example, they clarify that the making available of an AI system is covered under the AI Act regardless of the means of supply, whether the system is accessed through an application programming interface or the cloud, directly downloaded, supplied as a physical copy or embedded in a physical product. This follows the traditional interpretation of these concepts in EU product safety law. The guidelines also clarify that, for the purposes of Article 5, references to the “use” of an AI system include any misuse of the system that may amount to a prohibited practice.

Given that infringements of Article 5 carry the highest penalties and the prohibited practices involve the greatest interference with individual freedoms, the guidelines clarify that the scope of the prohibitions should be interpreted narrowly (see box “Article 5 prohibited practices”). This may give businesses a degree of comfort.

For each prohibited practice, the guidelines examine the rationale and objectives behind the ban, the meaning of each component of the prohibition’s wording, illustrative scenarios and examples, what falls outside the scope of the ban, and the interplay with other laws.

Emotion recognition 

One prohibited AI practice to which businesses are paying particular attention is the ban on the placing on the market, putting into service, or use of AI systems to infer the emotions of a natural person in the areas of the workplace and education institutions, except where the use of the AI system is for medical or safety reasons (Article 5(1)(f)).

Many questions had arisen from the AI Act’s wording, and the detailed guidelines clear up a significant number of them. The term “workplace” is a broad concept that includes any setting where work is performed, so spaces such as shops, cars and temporary sites count. The use of webcams and voice recognition systems by a call centre to track its employees’ emotions, such as anger, is prohibited. However, the use of voice recognition systems by a call centre to track its customers’ emotions, such as anger or impatience, is not prohibited by Article 5, provided that the systems do not track the employees’ emotions at the same time.

AI systems that monitor the emotional tone of hybrid work teams by identifying and inferring emotions from voice and imagery in video calls are prohibited. Using emotion recognition to assess employees’ well-being, motivation levels, and job or learning satisfaction does not qualify as a use for “medical reasons” and would also be prohibited.

Harmful manipulation and deception

Another prohibited AI practice that is of interest to businesses is the placing on the market, putting into service, or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behaviour of a person or persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, or others, significant harm (Article 5(1)(a)). 

The guidelines state that the concept of material distortion goes beyond lawful persuasion, which falls outside the scope of the prohibition. The threshold for whether a harm caused is “significant” is also addressed: relevant considerations include the degree of harm, whether the harm affects a large number of people and whether it is long-lasting and irreversible.

Definition of AI system guidelines

In the guidelines on the definition of an AI system, the definition in the AI Act is broken down into six cumulative elements: 

  • A machine-based system (element 1).
  • That is designed to operate with varying levels of autonomy (element 2).
  • That may exhibit adaptiveness after deployment (element 3).
  • That, for explicit or implicit objectives (element 4).
  • Infers, from the input it receives, how to generate outputs (element 5).
  • That can influence physical or virtual environments (element 6).

The guidelines also address a separate seventh element comprising a list of example outputs, such as predictions, content, recommendations or decisions, that pertain to element 5.

The guidelines explain how the definition adopts a lifecycle-based perspective encompassing the building phase (pre-deployment) and the use phase (post-deployment). The definition is intended to accommodate a wide range of AI systems. The guidelines break down and clarify each element of the definition in detail. 

They advise that the determination of whether a software system is an AI system should be based on the specific architecture and functionality of a given system and should consider the elements of the definition. 
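
To make the cumulative nature of the test concrete, the sketch below models the assessment as a set of boolean checks that must all hold. It is purely illustrative: the SystemProfile structure and its field names are invented for this example rather than taken from the guidelines, and the real assessment is a legal judgement, not a mechanical test.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative record of a system's characteristics (invented names)."""
    machine_based: bool                   # element 1: a machine-based system
    some_autonomy: bool                   # element 2: some degree of autonomy
    adaptive_after_deployment: bool       # element 3: "may" adapt after deployment
    has_objectives: bool                  # element 4: explicit or implicit objectives
    infers_outputs_from_input: bool       # element 5: infers how to generate outputs
    outputs_influence_environment: bool   # element 6: outputs can influence environments

def may_fall_within_definition(profile: SystemProfile) -> bool:
    """All cumulative elements must hold. Element 3 is phrased
    permissively ("may exhibit adaptiveness"), so a system that never
    adapts after deployment is not excluded on that ground alone."""
    return all([
        profile.machine_based,
        profile.some_autonomy,
        profile.has_objectives,
        profile.infers_outputs_from_input,
        profile.outputs_influence_environment,
    ])

# Example: a hand-coded macro that executes fixed rules has no
# inference capability, so it would fall outside the definition.
macro = SystemProfile(True, False, False, True, False, True)
assert may_fall_within_definition(macro) is False
```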

Of the six cumulative elements, the most crucial ones for determining whether an AI system is involved, as opposed to traditional software, are the “levels of autonomy” and the “ability to infer”. The two notions go hand in hand because an AI system’s capacity for inference is key to bringing about its autonomy.

Regarding the level of autonomy (element 2), the system must operate with some degree of independence from human involvement and human intervention. A system with no autonomy would not be an AI system, but full autonomy is not required; the system needs only to be autonomous to “some degree”.

Regarding the system’s ability to infer (element 5), AI techniques that enable inference include machine learning and logic- and knowledge-based approaches. Systems that are based on rules defined solely by humans to automatically execute operations do not fall within the definition of an AI system. Companies in the financial sector can breathe a sigh of relief because the guidelines treat logistic regression as falling outside the definition, an issue that had been hotly debated.
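
The contrast between these two categories can be pictured in code. Both snippets below are invented illustrations using a toy credit scenario: the first function merely executes rules fixed in advance by its human author, so on the guidelines’ reasoning it would sit outside the definition, while the second derives its decision rule from training data, that is, it infers from its input how to generate an output. A decision tree is used rather than logistic regression, which, as noted above, the guidelines treat as falling outside the definition.

```python
# Outside the definition on the guidelines' reasoning: the rules are
# defined solely by a human and the system simply executes them.
def rule_based_check(income: float, debts: float) -> bool:
    return income > 30_000 and debts / max(income, 1.0) < 0.4

# Closer to the definition: the decision rule is learned from data
# (a machine learning approach), not written out by a human.
from sklearn.tree import DecisionTreeClassifier

X_train = [[45_000, 5_000], [20_000, 15_000],
           [60_000, 10_000], [25_000, 20_000]]   # [income, debts] (toy data)
y_train = [1, 0, 1, 0]                           # toy approve/decline labels

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
prediction = model.predict([[40_000, 8_000]])    # inferred, not hand-coded
```

Whether such a system would also be high risk under Annex III, as credit scoring systems typically are, is a separate question from whether it meets the definition at all.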

The potential implications 

Regarding the prohibition guidelines, appropriate AI governance for both providers and deployers must take account of the ban on prohibited practices. Businesses should establish processes to confirm that a system does not involve a prohibited AI practice. Where a provider has confirmed that this is the case, it should prepare relevant information and documents for customers in the EU and put in place contractual safeguards to ensure that the system will not be used for prohibited practices.

The guidelines on prohibited AI practices state that, in providers’ contractual relationships with deployers (in the terms of use of the AI system), providers are expected to exclude the use of their AI system for prohibited practices and to provide appropriate information in the instructions for use, including on the human oversight that is required.

The guidelines also state that providers would be expected to take appropriate measures if they become aware that the AI system is being misused for a specific prohibited purpose by particular deployers, whether that misuse is reported to them or they discover it themselves; for example, because the provider controls the operation of the system through a platform and detects the misuse when performing checks.

Deployers should have checklists in place to ensure that the AI system they want to deploy does not carry out prohibited practices, such as emotion recognition. These checklists should be integrated into a deployer’s procurement processes to verify compliance. For example, an emotion recognition system used in customer support calls should exclude staff and target only customers. The deployer would still need to consider whether the product falls into a high-risk category under Annex III, but that is a separate issue.
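
As a sketch of how such a checklist item might be encoded in a procurement workflow, the function below flags emotion recognition deployments that appear to engage the Article 5(1)(f) prohibition. The structure and field names are invented for illustration, and the output is a prompt for legal review rather than a legal determination.

```python
from dataclasses import dataclass

@dataclass
class EmotionRecognitionUse:
    """Illustrative procurement record (invented field names)."""
    setting: str                       # e.g. "workplace", "education", "retail"
    monitors_staff_or_students: bool   # does it track employees or students?
    medical_or_safety_purpose: bool    # the Article 5(1)(f) exception

def needs_article_5_1_f_review(use: EmotionRecognitionUse) -> bool:
    """Coarse screening check: flag workplace or education deployments
    that monitor staff or students without a medical or safety purpose."""
    in_restricted_setting = use.setting in {"workplace", "education"}
    return (in_restricted_setting
            and use.monitors_staff_or_students
            and not use.medical_or_safety_purpose)

# A call-centre system that tracks customer emotions only is not flagged,
# mirroring the carve-out described in the guidelines.
customer_only = EmotionRecognitionUse("workplace", False, False)
assert needs_article_5_1_f_review(customer_only) is False
```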

There are still many grey areas. In particular, providers of potentially prohibited systems should carefully review any borderline cases and document their reasoning so that they can provide it to regulators if the circumstances arise.

In terms of the definition of an AI system, businesses now need to pay particular attention to the new guidance when creating their AI inventories, because misclassifying a system could result in a business overlooking its obligations and exposing itself to potential fines.

Toby Bond is a partner and Kate Deniston is a professional support lawyer at Bird & Bird. With thanks to Nils Lölfing, Oliver Belitz and Nora Santalu at Bird & Bird.

The guidelines on prohibited AI practices are available at https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act, and the guidelines on the definition of an AI system are available at https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-ai-system-definition-facilitate-first-ai-acts-rules-application.

Article 5 prohibited practices

Prohibited AI practice                                        Provision in EU AI Act (2024/1689/EU)
Harmful manipulation and deception                            Article 5(1)(a)
Harmful exploitation of vulnerabilities                       Article 5(1)(b)
Social scoring                                                Article 5(1)(c)
Individual criminal offence risk assessment and prediction    Article 5(1)(d)
Untargeted scraping to develop facial recognition databases   Article 5(1)(e)
Emotion recognition                                           Article 5(1)(f)
Biometric categorisation                                      Article 5(1)(g)
Real-time remote biometric identification                     Article 5(1)(h)
