The European Union's Artificial Intelligence Act is on track to become the world's first comprehensive regulation of this emerging technology. As a first mover, and by virtue of the "Brussels Effect," the AI Act may come to serve as a global standard for the regulation of AI, much as the EU General Data Protection Regulation has for data protection. Following a series of amendments adopted by the European Parliament in June, the final legislative deliberations on the AI Act, the "trilogue" negotiations, have begun.

Through this EU decision-making process, the three legislative bodies involved (the European Commission, the Council of the European Union and the European Parliament) will aim to produce an agreed version of the text. Each has already carved out its institutional position on the AI Act in preparation for the "tough" trilogue negotiations that lie ahead. In particular, several differences between the Council (which represents the member states) and the Parliament (which represents people's interests) need to be reconciled. Traditionally in these trilogues, the MEPs' strong emphasis on "the rigid protection of rights in legislative texts" has been counterbalanced by the member states' economic and social interests represented in the Council.

A brief history (and expected future) of the EU AI Act
  • April 2021: European Commission presents its proposal for the Artificial Intelligence Act.
  • December 2022: Council of the European Union adopts its common position ("general approach") on the AI Act.
  • June 2023: European Parliament MEPs adopt their negotiating position on the AI Act. Trilogues begin.
  • Late 2023 (expected): Political agreement on the AI Act is reached.
  • Early 2024 (expected): The finalized AI Act is adopted.
  • Late 2025 to early 2026 (expected): Following a likely 18- to 24-month transition period, the AI Act comes into effect.

In an effort to foresee what the finalized AI Act might look like, this article analyzes several areas of significant contention leading up to the negotiations: the definition of AI, the list of prohibited AI applications, the obligations on high-risk AI systems, foundation models, and enforcement. The AI Act is a proposal for a regulation, meaning it will be directly applicable and immediately enforceable in the member states upon entering into force. This makes the definitions laid out in it especially critical, as they will not be subject to differences across national implementing regulations.

The definition of AI

Within any given law, the definitions can be as important as its formal rules, or even function as their legal equivalent. As University of Nevada, Las Vegas, professor Jeanne Frazier Price has written: "the legislative definition empowers; it serves a performative function, investing groups of individuals or instances with rights or obligations."

In a field as complex as AI, entire studies have been devoted to the challenge of definition. It is not surprising, then, that there has been significant contention over definitional questions. While a lack of definitional clarity has helped the field attract investment and "grow, blossom, and advance at an ever-accelerating pace," it has complicated law and policymaking efforts, which often crave definitional clarity, if not simplicity. AI governance professionals can look forward to upcoming IAPP research on how AI is being defined across international jurisdictions.

The European Commission's proposed definition of AI was based on a list of techniques set out in an annex to the regulation. Both the Council and the Parliament, however, moved the definitions into the body of the text. The MEPs in Parliament also brought the definition of "AI system" under Article 3 in line with the definition developed by the OECD. Other important terminology changes made by the Parliament include referring to users of AI systems as "deployers," as well as adding new definitions for "affected persons," "foundation model" and "general purpose AI system."

The definitional contention within the AI Act runs between two poles: On the one hand, there is a concern the definition of AI may "cast the net too widely" and capture things as simple as calculations in a spreadsheet. On the other hand, an overly precise definition could hamper the law's efficacy and hinder the positive development of AI. Indeed, being "future-proof" is especially critical for legislation in a field defined by rapid technological change.

While progress has been made on the definition of AI at each stage of the text, the trilogues may bring further changes to how various terms within the AI value chain are defined.

Prohibited AI systems

Generally speaking, the AI Act takes a risk-based approach to the regulation of artificial intelligence. Indeed, a cornerstone of the law is to classify AI technologies by the level of risk they pose to the health and safety or fundamental rights of a person. The law's risk tiers, or categories, are unacceptable, high, and limited or minimal risk.

In general, AI systems posing unacceptable risk are prohibited outright by Article 5. These include, but are not limited to, AI systems that use subliminal, manipulative or deceptive techniques to distort a person's behavior, AI systems that exploit the vulnerabilities of a person or group, and AI systems that perform social scoring, evaluating or classifying natural persons or groups in a way that leads to their detrimental or unfavorable treatment.

Meanwhile, high-risk AI systems are permitted but subject to the strictest level of obligations — from ensuring that the human operators of high-risk systems are made aware of the risk of automation or confirmation bias, to carrying out a fundamental rights impact assessment.

Finally, limited or minimal-risk AI systems are subject to the lowest level of obligations, primarily involving transparency requirements that allow users to make informed decisions about them.
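
To make this tiered structure concrete, here is a minimal Python sketch that maps each risk tier to the general character of the obligations described above. It is purely illustrative: the tier names and one-line summaries paraphrase this article rather than the legal text, and all identifiers are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers under the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"              # prohibited outright (Article 5)
    HIGH = "high"                              # permitted, strictest obligations
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # lightest, transparency-focused obligations

# Hypothetical one-line summaries of what each tier entails, paraphrasing the text above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited; the system may not be placed on the EU market.",
    RiskTier.HIGH: "Permitted, subject to risk management, oversight and impact assessments.",
    RiskTier.LIMITED_OR_MINIMAL: "Permitted, subject mainly to transparency requirements.",
}

def obligations_for(tier: RiskTier) -> str:
    """Return a short summary of the obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```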

There has been contention over which AI systems should fall into which risk category, and topping that list is the question of where to place biometric surveillance in public places. The European Parliament added "real-time" remote biometric identification systems in publicly accessible spaces to the list of prohibited uses of AI, despite last-minute but ultimately unsuccessful attempts by the center-right European People's Party to introduce derogations to the ban. The EPP's position to allow this use was aligned with the initial proposal by the European Commission, which the Council also maintained.

While the European Parliament ultimately added this ban on the use of remote biometric identification systems in public spaces by law enforcement, it allowed for their ex-post use with prior judicial authorization. It also introduced a prohibition on biometric categorization systems that use sensitive characteristics, such as gender, race, ethnicity, citizenship status, religion and political orientation.

In sum, the heart of the contention is that the European Parliament is advocating for a broader list of prohibited AI systems, including software that scrapes facial images from the web, while the Council prefers a narrower one. Given this opposition, further changes to Article 5, including around the biometric identification ban and its exceptions, are likely to be made during the trilogues.

Requirements for high-risk AI systems

Article 6 of the AI Act sets out the rules for classifying AI systems as high-risk, and Annex III enumerates eight specific areas in which AI systems are deemed high-risk. These top-level categories of high-risk AI systems, subject to the strictest obligations under the law, are:

  1. Critical infrastructures.
  2. Biometric identification of natural persons.
  3. Educational and vocational training.
  4. Employment/workforce management.
  5. Essential private and public services.
  6. Law enforcement.
  7. Border control.
  8. Administration of justice and democratic processes.

Each of these categories of high-risk AI systems contains numerous sub-categories. The initial list proposed by the European Commission has undergone significant revision by both the Council and the Parliament. For example, in its amendments, the European Parliament added to Annex III's high-risk classification the AI systems used by certain social media platforms (i.e., those designated as "very large online platforms" under the Digital Services Act) to generate recommendations for users, as well as AI systems intended to influence the outcome of an election or referendum.
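
As a rough illustration of how a provider might begin to screen a system against these top-level areas, the short Python sketch below performs a naive membership check. The area names paraphrase the list above, the function is hypothetical, and, as the study discussed next suggests, real classification turns on the detailed sub-categories and is often far less clear-cut.

```python
# Hypothetical, simplified screening against Annex III's top-level high-risk areas.
# The strings paraphrase the eight categories listed above; actual classification
# depends on the detailed sub-categories and exemptions in the legal text.
ANNEX_III_AREAS = {
    "critical infrastructure",
    "biometric identification",
    "education and vocational training",
    "employment and workforce management",
    "essential private and public services",
    "law enforcement",
    "border control",
    "administration of justice and democratic processes",
}

def is_potentially_high_risk(application_area: str) -> bool:
    """Return True if the stated application area matches an Annex III category."""
    return application_area.strip().lower() in ANNEX_III_AREAS

print(is_potentially_high_risk("Law enforcement"))  # True
```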

There remains significant ambiguity around the methodology for determining which specific AI systems should be classified as high-risk. The Initiative for Applied Artificial Intelligence, for instance, conducted a risk classification study of more than 100 AI systems and found the risk level to be unclear for about 40% of them. The report cited critical infrastructure, employment and law enforcement as the three areas most responsible for unclear classifications. By comparison, about 1% of the AI systems were classified as prohibited, 18% as high-risk, and 42% as low or minimal risk.

Requirements for foundation models

The question of what qualifies as a high-risk AI system has also been further complicated by the decision of the European Parliament to impose a regime on foundation models that "largely draws from the one for high-risk AI applications, notably in risk management and data governance." Indeed, one of the most significant proposed changes to the text of the AI Act made by Parliament was to bring "providers of a foundation model" (defined in Article 3(1)) within the scope of various obligations beyond the minimal transparency requirements.

Generative AI systems, including large language models, or LLMs, such as ChatGPT and Google Bard, are a subset of foundation models, which are themselves a subset of general purpose AI. The new obligations on foundation models introduced by Parliament within Article 28b(2) include requirements to:

  • Demonstrate "the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law ..." using "appropriate design, testing and analysis."
  • Apply certain data governance measures "to examine the suitability of the data sources and possible biases and appropriate mitigation."
  • Involve independent experts, document the analysis, and do "extensive testing" to achieve "appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity."
  • Design the foundation model with the capability to measure its consumption of energy and resources and its environmental impact.
  • Create "extensive technical documentation and intelligible instructions" that would enable compliance with Articles 16 and 28(1) by downstream providers.
  • Establish a "quality management system" that would ensure and document compliance with Article 28.
  • Register the foundation model in the EU database for high-risk AI systems.

By contrast, the Council's approach toward foundation models had been to ask the European Commission to create tailored obligations for them a year and a half after the AI Act's entry into force. However, with the European Parliament proposing a more "elaborate" and explicit approach (as documented above), the precise set of obligations for foundation models is also likely to be a point of contention within the trilogues.

Enforcement

A final point of contention that is likely to influence the outcome of trilogues concerns the enforcement of the AI Act and the coordination among various national and EU authorities. Architecturally, the AI Act resembles the GDPR in that it will bring various competent national authorities together on an Artificial Intelligence Board, similar in function to the European Data Protection Board. The European Parliament's version of the AI Act would also establish a new EU body called the AI Office (Article 56), which would be equipped with an array of administrative, consultative, interpretive and enforcement-related powers, as well as responsibility for coordinating cross-border investigations.

In its amendments, the European Parliament made substantive revisions to the fines that can be levied under the AI Act, increasing the fines for noncompliance with Article 5 (on prohibited AI systems) to 40 million euros or, for companies, up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. A lower tier of administrative fines (20 million euros or 4% of global turnover) would apply to violations of Article 10 (data governance) and Article 13 (transparency), while other violations would be subject to fines of 10 million euros or 2% of global annual turnover.
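
As a rough sketch of the "whichever is higher" logic in these penalty tiers, the following Python snippet computes the applicable cap for a hypothetical company. The monetary figures and percentages mirror the Parliament's amendments as described above; the example turnover is invented.

```python
def fine_cap_eur(flat_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a company: the flat cap or the
    turnover-based cap, whichever is higher."""
    return max(flat_cap_eur, turnover_share * annual_turnover_eur)

# Article 5 violation (40 million euros or 7% of worldwide annual turnover) for a
# hypothetical company with 2 billion euros in turnover: the 7% cap, 140 million euros, applies.
print(fine_cap_eur(40_000_000, 0.07, 2_000_000_000))
```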

But perhaps the most interesting enforcement dynamics of the AI Act are not to be found anywhere in the text; rather, they concern the increasing role of, and enforcement by, existing European data protection supervisory authorities. Indeed, as IAPP Vice President and Chief Knowledge Officer Caitlin Fennessy, CIPP, has observed, data protection authorities "have been some of the first to launch investigations of AI-based products and services, drawing on their experience and the more-established privacy rulebook."

Recall that Italy's data protection authority, the Garante, temporarily suspended ChatGPT over privacy concerns in April. While the Garante subsequently lifted its ban, the move was likely the catalyst for the European Data Protection Board's decision to launch a task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities."

France's DPA, the Commission nationale de l'informatique et des libertés, has also positioned itself as an AI enforcement authority. In its "Action plan on AI" for developing "privacy-friendly AI systems," the CNIL notes it will pay particular attention to whether actors who use personal data to develop, train or deploy AI have carried out a data protection impact assessment and have taken measures to inform people about the exercise of their rights.

And, at the close of their roundtable meeting in June, DPAs of the G7 issued a joint statement on generative AI and the "present and potential harms" it presents to privacy and data protection. Echoing a point made by many lawmakers and regulators in the U.S. and globally, the statement emphasized that "current law applies to generative AI products and uses, even as different jurisdictions continue to develop AI-specific laws and policies."

Given these dynamics playing out inter-institutionally and internationally, the enforcement mechanisms of the AI Act — and the role of data protection supervisory authorities in this equation — are likely to be contentious not only during trilogues, but long after the law comes into effect as well.

After trilogues

As the EU AI Act trilogues proceed, contention remains around the questions of how the law should define AI, which AI applications should be prohibited, what obligations should be placed on different categories of permitted AI, and what role DPAs should play in AI enforcement.

As they progress, other issues may emerge as points of contention as well. According to Morrison Foerster, in addition to the definition and risk classification of AI systems, "the interplay between existing laws and the AI Act to avoid double regulation" will be another focal point of discussion during the trilogues. Writing for Kluwer Competition Blog, Muhammed Demircan of the Vrije Universiteit Brussel expects objections to the European Parliament's amendments to the AI Act, particularly around the obligations of deployers, "as these obligations bring a significant burden on usual and ordinary businesses of the Member States, who would 'buy' a high-risk AI system for various purposes." In July, Euractiv reported that the innovation provisions and the fundamental rights impact assessment were among the issues still demanding lawmakers' attention.

As the final text takes shape, some applicable guidance is already available, from the practical steps companies should take to launch an AI governance program, to how they can use AI model cards in service of the AI Act's transparency principles. Demand for such guidance will only grow as the final version of the text materializes.

Due to the "high-stakes, hot button" nature of the AI Act, legislators intend to reach a deal on the text by the end of the year or, at the latest, before the 2024 European Parliament election scheduled for June. While there are no easy solutions to the contentious issues infusing the AI Act, it is entirely within the realm of possibility, and a much hoped-for outcome, that they will be resolved by that time.