The issue of regulating biometric artificial intelligence systems dominated the June 2023 debates in the European Parliament and has been contentious since the European Commission first published the proposed Artificial Intelligence Act.
There are significant differences between the approaches of the European Parliament, European Commission and the Council of the European Union.
Looking at the number of biometric data-related definitions, it is evident the AI Act emphasizes systems using biometric or biometric-based data and takes a more sophisticated approach to regulating these systems. The EU General Data Protection Regulation contains only one definition relating to biometrics — biometric data under Article 4(14). By contrast, the European Commission's proposal for the AI Act contains six biometrics-related definitions: biometric data, emotion recognition system, biometric categorization system, remote biometric identification system, real-time remote biometric identification system and post-remote biometric identification system. The European Parliament position paper adds three further definitions: biometric-based data, biometric identification and biometric verification. It expands the definition of "biometric categorization" to include inferences derived from biometric data. Finally, the Council of the EU defines "general purpose AI," which covers image and speech recognition systems that may process biometric data in a relevant context.
Some of these definitions are familiar from the GDPR or past opinions of authorities, whereas others are new and still being developed.
The good and bad biometrics
Neither the AI Act nor the different EU institutions treat all biometric systems the same. For example, the European Commission classifies biometric categorization merely as a "high-risk" AI system, whereas the European Parliament considers it to pose an unacceptable risk and bans it, with exceptions for certain therapeutic use cases. The Council of the EU, on the other hand, removed biometric categorization systems from the high-risk list and imposes only transparency obligations on them.
The strictest approach has come from the European Parliament, which expanded the list of banned biometric AI systems and upgraded others to the high-risk category. It also distinguished biometric verification systems (one-to-one matching) from other biometric and biometric-based identification systems (one-to-many matching), considering the former to be lower-risk AI systems. One can therefore think of biometric verification systems as "the good biometrics."
Real-time remote biometric identification has been the star of the show in all three institutions' debates, and their views varied strongly on the carveouts from bans on such AI systems in publicly accessible spaces. The European Commission's starting point was a ban on law enforcement use with three exceptions: finding victims of crime, including missing children; preventing imminent threats such as terrorist attacks; and detecting and localizing those facing criminal charges punishable by at least three years' imprisonment.
The Council of the EU expanded these carveouts for law enforcement. The conversations coincided with the French Parliament's plan to deploy facial recognition technology in public spaces for the Paris 2024 Olympics.
The European Parliament's proposal text went in the opposite direction of the Council of the EU, banning the use of real-time remote biometric identification in public spaces altogether. The ban would affect both private and public entities.
What about biometric AI systems used for fraud prevention in financial services?
Much to the glee of financial services institutions, Annex III – paragraph 1 (5)(b) of the European Parliament text provides a carve-out for fraud prevention AI systems from the high-risk systems list: "AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score (are high risk), with the exception of AI systems used for the purpose of detecting financial fraud."
However, it is unclear whether the act aims to limit this exemption to fraud systems used only to assess consumer creditworthiness and credit scores, or whether it would extend to other fields of financial services, such as the payments sector, where fraud prevention is also required for strong customer authentication and transaction monitoring.
Recital 37 of the European Parliament text states, "AI systems provided for by (EU) law for the purpose of detecting fraud in the offering of financial services should not be considered as high-risk under this Regulation." However, no EU law currently provides expressly for the use of AI for fraud detection in financial services, even though some regulators encourage its use. The draft Payment Services Regulation contains a recital (Recital 103) that says: "To be able to prevent ever new types of fraud, transaction monitoring should be constantly improved, making full use of technology such as artificial intelligence."
Separately, given the European Parliament text classifies biometric and biometric-based systems as high risk under Annex III, it is unclear whether "biometric systems used for detecting financial fraud" would benefit from the same exemption as other AI systems used to detect financial fraud or be grouped with biometric systems as high risk. While Recital 33 draws a distinction between one-to-many and one-to-one biometric systems, this distinction is not echoed in Annex III.
Special-category data under Article 9 GDPR vs the European Parliament text
One of the European Parliament's new amendments, Recital 33a, shows how special-category biometric data under the GDPR has influenced the high-risk classification under the EU AI Act:
"As biometric data constitute a special category of sensitive personal data in accordance with (GDPR) Regulation 2016/679, it is appropriate to classify as high-risk several critical use-cases of biometric and biometrics-based systems. AI systems intended to be used for biometric identification of natural persons and AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those which are prohibited under this Regulation should therefore be classified as high-risk."
However, biometric data does not always constitute special-category data under the GDPR, as the European Parliament text seems to assume. GDPR Article 9(1) considers biometric data to be special-category data only when it is used for the purpose of uniquely identifying a natural person.
Where the European Parliament classifies emotion-recognition systems as being either high risk or prohibited, the same processing under the GDPR may not even constitute special-category data because recognizing emotion on a face does not necessarily require the unique identification of the individual.
While emotion-recognition systems may suggest the mental state of the individual, which would constitute health data and be considered special-category data under the GDPR, Article 9 would not prohibit the processing because of the biometric data but rather because of the derived health data.
Similarly, biometric categorization systems — banned by the European Parliament text — may allow the detection of sensitive data such as an individual's political orientation in some uses. Such processing would not be prohibited under the GDPR on the grounds of processing biometric data but rather because of processing data revealing political opinions, which is special-category data.
We expect this nuance to be raised during trilogue negotiations, to explain that biometric categorization and emotion recognition are not necessarily special-category data, or prohibited processing, under the GDPR, yet are classified as banned or high-risk practices under the AI Act. This illustrates the divergence between the two laws, which market participants need to respect.
What is next?
Trilogues have begun and the final version of the EU AI Act is expected to be agreed upon before the end of 2023.