Biometrics under the EU AI Act

Written By

Ruth Boardman

Partner
UK

I am based in London and co-head Bird & Bird's International Privacy and Data Protection Group. I enjoy providing practical advice and solutions to complex legal issues.

Nora Santalu

Associate
UK

I'm an associate in the privacy and data protection team in London. I advise on the GDPR, the EU AI Act and ePrivacy rules, with a particular focus on the regulation of biometrics and fraud prevention.

The regulation of biometric AI systems dominated the debates ahead of the European Parliament's June 2023 vote and has been a contentious issue since the European Commission first published its proposal.

There are significant differences between the approaches of the three European institutions. This article shines a light on how the proposals compare and interact with the GDPR, and draws attention to areas of uncertainty, particularly in relation to financial services' use of AI for fraud prevention. The article also touches upon the high-risk classification of emotion recognition and biometric categorisation tools under the EU AI Act and how this links to special category data under the GDPR.

Simply by looking at the number of definitions relating to biometric data, it is evident that the AI Act places significant emphasis on systems using biometric or biometric-based data and takes a more sophisticated approach to regulating these systems. The GDPR contains only one definition relating to biometrics (biometric data, Article 4(14)). The Commission's proposal for the AI Act, on the other hand, contains six biometrics-related definitions (biometric data, emotion recognition system, biometric categorisation system, remote biometric identification system, real-time remote biometric identification system and post remote biometric identification system).

The EP position paper adds three further definitions (biometric-based data, biometric identification and biometric verification) and expands the definition of “biometric categorisation” to include inferences derived from biometric data. Finally, the Council adds a definition of “general purpose AI”, which covers image and speech recognition systems that could involve biometric data in a relevant context.

Some of these definitions are familiar from the GDPR or past opinions of authorities, whereas others are new and still being developed.

The Good And The Bad Biometrics

Neither the AI Act as a whole nor the individual EU institutions treat all biometric systems the same. For example, the Commission considers biometric categorisation to be merely a “high-risk” AI system, whereas the EP considers it to pose an unacceptable risk and bans it (with certain excepted use cases for therapeutic purposes). The Council, by contrast, has taken biometric categorisation systems out of the high-risk category and imposes only transparency obligations on them.

The strictest approach has come from the EP, which has expanded the list of banned biometric AI systems and upgraded some others into the high-risk category. It has also distinguished biometric verification systems (1:1 matching) from other biometric and biometric-based identification systems (1:many matching), considering the former to be lower-risk in comparison. Hence, one can think of biometric verification systems as being ‘the good biometrics’ to some degree.

Real-time remote biometric identification has been the star of the show in all three institutions’ debates, with views varying strongly on the carve-outs from the ban on such AI systems in publicly accessible spaces. The Commission’s starting point was a ban targeted at law enforcement use, with three exceptions: finding victims of crime (including missing children); preventing imminent threats (such as terrorist attacks); and detecting and localising suspects of criminal offences punishable by at least three years’ imprisonment.
The Council expanded these carveouts for law enforcement. The conversations also coincided with the French parliament’s plans to deploy facial recognition technology in public spaces for the Paris 2024 Olympics.

The EP proposal text went in the opposite direction of the Council and banned use of real-time remote biometric identification in public spaces altogether (the ban would affect both private and public entities).

We have provided an Appendix at the end of this article comparing the biometric related provisions in the texts from the three institutions.

What About Biometric AI Systems Used For Fraud Prevention In Financial Services?

Much to the glee of financial services institutions, Annex III – paragraph 1 (5)(b) of the EP text provides a carve out for fraud prevention AI systems from the high-risk systems list: “AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score [are high risk], with the exception of AI systems used for the purpose of detecting financial fraud”.

However, it is unclear whether the Act aims to limit this exemption to fraud systems only used in assessing consumer creditworthiness and credit score, or if it would extend to other fields of financial services such as the payments sector, where fraud prevention is also required for strong customer authentication and transaction monitoring.

Recital 37 of the EP Text states “… AI systems provided for by Union law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation.” However, there is currently no Union law that expressly provides for the use of AI for fraud detection in financial services, even though its use is encouraged by some regulators. The draft Payment Services Regulation does contain a recital (recital 103) which says: “To be able to prevent ever new types of fraud, transaction monitoring should be constantly improved, making full use of technology such as artificial intelligence.”

Separately, given that the EP text classifies “biometric and biometric-based systems” as high-risk under Annex III, it is also unclear whether biometric identification systems used for detecting financial fraud would enjoy the freedom of other AI systems used to detect financial fraud, or would be batched with biometric identification systems as high-risk. Whilst recital 33 distinguishes between one-to-many and one-to-one biometric systems, this distinction is not echoed in Annex III.

Special category data under Article 9 GDPR vs the EP Text on banned and high-risk biometric and biometric-based systems

One of the new recitals proposed by the EP (recital 33a) shows how the GDPR's treatment of biometric data as special category data has influenced the high-risk classification under the EU AI Act.

(33a) As biometric data constitute a special category of sensitive personal data in accordance with Regulation 2016/679 [the GDPR], it is appropriate to classify as high-risk several critical use-cases of biometric and biometrics-based systems. AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those which are prohibited under this Regulation should therefore be classified as high-risk…

However, biometric data does not always constitute special category data under the GDPR, as the Parliament text seems to assume. Article 9(1) GDPR considers biometric data to be special category data only when it is used for the purpose of uniquely identifying a natural person.

Where the Parliament classifies emotion recognition systems as either high-risk or prohibited, the same processing under the GDPR may not even involve special category data (because recognising emotion on a face does not necessarily require the unique identification of the individual).

Whilst emotion recognition systems may suggest the mental state of an individual, which would constitute health data and therefore special category data under the GDPR, Article 9 would restrict the processing not because of the biometric data, but because of the processing of derived health data.

Similarly, biometric categorisation systems (banned by the EP text) may in some uses allow the detection of sensitive data such as an individual’s political orientation (see the study “Facial recognition technology can expose political orientation from naturalistic facial images”). Such processing would not be prohibited under the GDPR on the grounds of processing biometric data, but rather because it processes data revealing political opinions, which is special category data.

We would expect this nuance to be raised during the trilogues to clarify that biometric categorisation and emotion recognition do not necessarily involve special category data (with its prohibition on processing) under the GDPR, even though they are classified as banned or high-risk practices under the AI Act. This is a divergence between the two laws which market participants will need to navigate.

What is next?

Trilogues have begun; the final version of the AI Act is expected to be agreed before the end of 2023.

APPENDIX

Banned Biometric AI Systems under the EU AI Act - Comparing the three texts

Commission Proposal
  • The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
  • (i) the targeted search for specific potential victims of crime, including missing children
  • (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack
  • (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.
EU Council Approach

  • The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities or on their behalf for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

    • the targeted search for specific potential victims of crime;
    • the prevention of a specific and substantial threat to the critical infrastructure, life, health or physical safety of natural persons or the prevention of terrorist attacks;
    • the localisation or identification of a natural person for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences, referred to in Article 2(2) of Council Framework Decision 2002/584/JHA32 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, or other specific offences punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least five years, as determined by the law of that Member State.
  • The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements:
    • the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
    • the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

  • In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives [referred to above] shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.

    • Each use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation provided that, such authorisation shall be requested without undue delay during use of the AI system, and if such authorisation is rejected, its use shall be stopped with immediate effect.

    • The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.

    • A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.

EP Text

  • The placing on the market, putting into service or use of biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics. This prohibition shall not apply to AI systems intended to be used for approved therapeutical purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian.
  • The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces;
  • The placing on the market, putting into service or use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • The placing on the market, putting into service or use of AI systems to infer emotions of a natural person in the areas of law enforcement, border management, in workplace and education institutions.

 

High-Risk Biometric AI Systems under Annex III - Comparing the three texts

Commission Proposal
  • Biometric identification and categorisation of natural persons:
  • (a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons

EU Council Approach

  • Biometrics:
    (a) Remote biometric identification systems.

EP Text

  • Biometric and biometrics-based systems;

    • (a) AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5 (i.e. banned biometric systems);
    • (aa) AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; Point 1 shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be.
