AI & the Workplace: Navigating Prohibited AI Practices in the EU

Artificial Intelligence is reshaping workplaces, enhancing efficiency, and driving innovation. However, as its role expands, so does the need for robust regulatory safeguards to ensure its responsible use.

The AI Act 2024 (Regulation (EU) 2024/1689) lays down harmonised rules for the placing on the market, putting into service and use of AI in the Union. It aims to strike a balance between promoting innovation and the uptake of AI on the one hand, and ensuring a high level of protection of health, safety and fundamental rights in the Union on the other. The AI Act follows a risk-based approach, classifying AI systems into four categories (unacceptable, high, limited and minimal risk) depending on their potential risks to fundamental rights and European values.

On 2 February 2025, Article 5 of the AI Act came into effect. It sets out strict prohibitions on AI systems deemed unacceptable due to their potential risks to privacy, safety, equality and fairness.

With a view to increasing legal clarity and ensuring a consistent, effective and uniform application of the AI Act, the EU Commission recently published Guidelines on Prohibited AI Practices, setting out the EU Commission’s interpretation of prohibited AI practices as well as exceptions to the rules. While non-binding (yet to be formally approved at the time of writing), these Guidelines serve as a key reference for businesses navigating AI compliance.

Scope of the AI Act – who does it apply to and what kind of activities? 

The AI Act applies to both AI providers (who develop or distribute AI systems) and deployers (who use AI within the EU, even if they are based outside it). This includes companies using AI for employee management, security, or customer relations. 

Regarding the nature of the activities, the Guidelines clarify each prohibition listed in Article 5(1)(a) to (h), as well as the safeguards and conditions that apply to the exceptions (Article 5(2) to (7)). These range from some of the most concerning AI applications, such as harmful manipulation, social scoring and emotion recognition, to biometric categorisation based on 'sensitive' characteristics and real-time remote biometric identification — practices that, if misused, could erode privacy and infringe on workers' fundamental rights.

Prohibited AI Practices 

1. Manipulative and Deceptive Techniques (Article 5(1)(a))

Prohibited Practice: Employing subliminal or purposefully manipulative techniques with the intent or effect of impairing an individual’s ability to make informed decisions, where the manipulation occurs without the person's awareness and causes, or is reasonably likely to cause, significant harm.

Exception: AI practices that are transparent, consciously perceived by the user, or do not cause harm remain permitted.

E.g.: Subliminal messaging: Embedding imperceptible images or sounds to influence employees to accept company policies.

Manipulative interfaces: Using deceptive designs to collect sensitive data without users' awareness.

2. Exploitation of Vulnerabilities (Article 5(1)(b))

Prohibited Practice: Exploiting targeted vulnerabilities related to age, disability or socio-economic status in a way that distorts behaviour and causes, or is reasonably likely to cause, significant harm.

Exception: AI applications designed to support or benefit individuals—such as accessibility tools or training programs—are allowed, provided they enhance well-being.

E.g.: Targeting older workers with pressure-based retirement incentives using AI analysis of cognitive weaknesses.

AI-driven pressure on financially vulnerable employees to accept unfavourable contract terms.

3. Social Scoring (Article 5(1)(c))

Prohibited Practice: Evaluating individuals based on social behaviour or unrelated characteristics (with data collected from unrelated contexts), leading to unjustified or disproportionate negative treatment. The prohibition may also cover cases where awards or preferential treatment are given to certain individuals, since this implies less favourable treatment of other individuals.

Exception: Legitimate scoring based on relevant and lawful criteria—such as job performance or fraud prevention—is allowed, provided it is proportionate and justified.

E.g.: Profiling employees based on social media activity to assess job performance.

Using unrelated personal data to determine employee benefits.

Preferential treatment in employment or housing allocation based on social scoring profiling.  

4. Predictive Criminal Risk (Article 5(1)(d))

Prohibited Practice: Using AI to predict criminal behaviour of natural persons solely based on profiling or personality traits, without an objective, verifiable basis for predictions.

Exception: AI supporting human decisions with verifiable evidence (e.g., background checks with documented incidents) is permitted.

E.g.: Profiling job applicants solely based on past addresses or psychological traits to predict potential criminal misconduct.

5. Untargeted Facial Scraping (Article 5(1)(e))

Prohibited Practice: Collecting facial images from public sources (such as the internet) or CCTV without targeted purpose, to build facial recognition databases.

Exception: Targeted and consent-based data collection for clearly defined security purposes is allowed.

E.g.: Using untargeted facial scraping to build a facial recognition tool for employee monitoring.

6. Emotion Recognition (Article 5(1)(f))

Prohibited Practice: Using AI to infer emotions of individuals in workplaces or educational settings.

Exception: Emotion recognition for medical or safety purposes (e.g., detecting fatigue in drivers) is permitted.

E.g.: Monitoring employee facial expressions during meetings to assess engagement.

Using webcams and voice recognition systems to track employees’ emotions in call centres.

Employing emotion recognition AI during recruitment or probationary periods.

7. Biometric Categorisation (Article 5(1)(g))

Prohibited Practice: Categorising individuals based on biometric data in order to deduce or infer sensitive characteristics (e.g., race, political beliefs, sexual orientation, trade union membership, etc.).

Exception: Lawful categorisation for law enforcement purposes is permitted under strict conditions.

E.g.: AI systems categorising employees based on facial features to infer religious beliefs.

8. Real-Time Biometric Identification (RBI) (Article 5(1)(h))

Prohibited Practice: Using real-time biometric identification in public spaces for law enforcement purposes.

Exception: Explicit national law exceptions may apply for limited security-related cases (e.g., counterterrorism or missing persons searches).

E.g.: Installing facial recognition cameras at a store entrance using real-time biometric identification. 

Compliance Obligations for Employers

Employers deploying AI systems must ensure they do not inadvertently use prohibited AI. Human oversight, due diligence in vendor selection, and internal training are critical to mitigating legal risks. Companies should:

  1. Audit Existing AI Systems – Conduct a review to identify any practices that could fall under Article 5 prohibitions.
  2. Screen AI Vendors – Ensure AI providers comply with the AI Act’s restrictions and demand transparency regarding system design and safeguards.
  3. Train HR and Legal Teams – Educate staff on AI compliance, including legal risks, ethical considerations, and recognizing manipulative or exploitative AI behaviours.
  4. Implement Safeguards – Establish internal policies to prevent manipulation, discrimination, or unauthorised surveillance.
  5. Monitor Vendor Contracts – Include clauses prohibiting the use of banned AI practices and requiring compliance with the AI Act.

But beware: complying with the AI Act does not by itself guarantee full legal conformity. AI use must also align with other EU and national laws, such as the GDPR, the Digital Services Act, consumer protection laws and anti-discrimination rules; these frameworks overlap, and each imposes its own requirements.

Enforcement and Penalties

Non-compliance with the AI Act carries severe financial penalties: for engaging in prohibited practices, private companies face fines of up to €35 million or 7% of global annual turnover, whichever is higher.
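For illustration only, the "whichever is higher" ceiling can be expressed as a simple maximum. This is a sketch, not legal advice: the actual fine in any given case is set by the competent authority within this ceiling, taking into account the circumstances of the infringement.

```python
def max_fine_cap(global_turnover_eur: float) -> float:
    """Upper bound of the fine for a prohibited-practice violation
    under the AI Act: EUR 35 million or 7% of total worldwide
    annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a company with EUR 1 billion in global annual turnover,
# 7% of turnover (EUR 70 million) exceeds the EUR 35 million floor:
print(max_fine_cap(1_000_000_000))  # 70000000.0
```

For smaller companies whose 7%-of-turnover figure falls below €35 million, the €35 million amount becomes the applicable ceiling.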

The EU Commission and national authorities will enforce compliance, and violations may be detected through complaints or proactive investigations.

Interested in the AI Act?

Explore our comprehensive EU AI Act Guide, which provides regulatory insights, compliance recommendations, and key legal resources to help your organisation navigate the AI Act effectively.

Need Assistance?

Our firm’s International Employment Group can assist with AI system audits, compliance strategies, and vendor contract reviews to ensure full alignment with the AI Act.
Contact us today to protect your business against legal risks.

Stay Compliant. Stay Ahead.

Author: Chloé Van Der Belen
