The European Union has been working on the world's first comprehensive law to regulate artificial intelligence. The file is approaching the finish line two years after the legislative proposal was presented.

The EU AI Act has the potential to become the international benchmark for regulating the fast-paced AI field, much like the General Data Protection Regulation inspired data protection regimes in countries worldwide, from Brazil to Japan to India.

"We are on the verge of building a real landmark legislation in the digital landscape, not just for Europe but for the entire world," Brando Benifei, one of the members of European Parliament spearheading the file, told his colleagues ahead of the key committee vote 11 May.

The committee vote confirmed the compromise text negotiated by Benifei and the other co-rapporteur, Dragoș Tudorache. The file still needs to undergo the test of the plenary vote in mid-June, but the text is largely consolidated.

Parliament will be the last co-legislator to reach a position on the file, as the EU Council of Ministers endorsed a general approach in December. Thus, interinstitutional negotiations are set to kick off at the end of June, but the dividing lines are already evident.

AI definition

The definition of AI was a critical point of discussion, as it determined the scope and how future-proof the draft law would be. Initially, the European Commission defined AI based on a series of techniques listed in an annex to the regulation, meaning the EU executive could update it single-handedly.

Remarkably, both the EU Council and Parliament moved the definition from the annex to the body of the text, considering that such a fundamental aspect of the legislation should not be changeable at the stroke of a pen, without a deliberative process. MEPs aligned the definition with that of the Organisation for Economic Co-operation and Development (OECD). As the OECD was already working on tweaking its definition, Parliament sought to anticipate the new wording.

EU countries were concerned the AI Act would cast the net too widely and cover traditional software applications. The definition was narrowed to systems using machine learning and logic- or knowledge-based approaches.

Parliament "addressed a key shortcoming of the commission's draft by fixing the definition of AI systems. The new definition acknowledges that there are AI systems which are not given specific objectives," said Kris Shrishak, a technologist at the nonprofit Irish Council for Civil Liberties.

General purpose AI

The massive popularity of ChatGPT brought the potential of large language models to the forefront of the AI Act discussions. What to do with AI systems that do not have a specific objective, better known as general purpose AI (GPAI), was put on the table by France months before the chatbot's release.

The EU Council's approach kicks the can down the road, requesting that the European Commission tailor obligations for these AI models within 18 months of the regulation's entry into force, based on a public consultation and an impact assessment.

Parliament's approach is much more elaborate. GPAI is not covered under the AI Act by default. The bulk of the obligations would fall on the economic operator that substantially modifies the system. At the same time, the GPAI provider would have to give all the relevant information to support the downstream operator's compliance.

MEPs proposed tighter rules for foundation models, a subcategory of GPAI, based on a concept developed at Stanford University. The distinction lies in the models' adaptability, the scale of their training data and their capacity to be used beyond their intended purpose.

The regime for foundation models largely draws from the one for high-risk AI applications, notably in risk management and data governance. In addition, the system's robustness must be proved throughout its lifecycle via external audits.

Further requirements were put on generative AI, a subcategory of foundation models, particularly regarding transparency. Users would have to be informed when content is AI-generated, and providers would have to disclose a detailed summary of the training data covered by copyright law.

Banned applications

There is a consensus in the EU that AI uses posing an unacceptable risk, like social scoring and manipulative techniques, should be banned altogether. But deciding where to draw the line is highly political, especially when it comes to law enforcement.

European governments have been historically keen to give their police agencies leeway to fight crime. By contrast, MEPs are more concerned with potential abuses and protecting fundamental rights. The AI Act will be another milestone in this eternal struggle.

The initial proposal limited the use of real-time biometric identification systems to specific situations like terrorist attacks and kidnapping. This approach was maintained in the EU Council's position.

Conversely, in Parliament, the majority supported a complete ban on real-time biometric identification systems, allowing only ex-post use for serious crimes and subject to judicial approval. The restriction was introduced despite resistance from the conservative European People's Party, which has a strong law enforcement faction.

The MEPs further banned biometric categorization, predictive policing and software that scrapes facial images from the internet to build databases, as Clearview AI does. AI systems for emotion recognition were also banned in law enforcement, border management, workplaces and educational institutions.

High-risk AI uses

The AI Act follows a risk-based approach, meaning systems deemed to pose a significant risk must follow a stricter regulatory regime. While this categorization was initially automatic, both the EU Council and Parliament introduced an extra layer so that only AI applications that are indeed high risk are captured.

However, Parliament's text goes as far as giving AI providers the possibility to launch their systems on the market if they consider them safe. Still, they could incur sanctions if the competent authority finds they were wrong.

Both institutions significantly amended the list of high-risk uses. The EU Council removed law enforcement applications such as deep-fake detection, crime analytics and systems used to authenticate documents.

By contrast, Parliament included more use cases in law enforcement, border control and the administration of justice. Meanwhile, the wording was made more precise for critical areas such as employment, education, infrastructure and access to essential services.

MEPs added to the list of high-risk applications the recommender systems used by social media platforms designated as very large online platforms under the Digital Services Act.

Compliance requirements

Providers of AI systems deemed high risk must comply with obligations related to risk management, data governance and technical documentation. Both EU institutions amended the obligations significantly, making them clearer but also more prescriptive.

A general rule for these systems is that at least two people should review their outputs. The EU Council introduced an exception to this 'four-eyes' principle for AI solutions used in border control.

The member states also clarified that the quality management systems required under the AI Act could be integrated into those already established to comply with other EU rules, for instance in the financial sector.

Parliament held an extensive discussion on a provision allowing AI developers to process sensitive data, such as race and religious beliefs, to identify potential biases. The measure was maintained, but the conditions under which such processing can take place were tightened.

The MEPs also introduced a requirement for fundamental rights impact assessments for all users of high-risk AI systems, meant to consider the potential impact on vulnerable groups. A similar measure is already being rolled out in the Netherlands following a national scandal in which thousands of families were wrongly accused of child-benefit fraud because of a flawed algorithm.

For the Netherlands' Minister for Digitalisation, Alexandra van Huffelen, the idea is to integrate the new human rights impact assessment with the data protection impact assessment mandated under the GDPR.

"In line with the EU data protection philosophy, our approach is centred around the purpose for which the data is used," van Huffelen said.

Enforcement

The enforcement architecture of the AI Act resembles that of the GDPR, with the main competences attributed to national authorities, which would be brought together in an AI board meant to ensure consistent application across the bloc.

The parallel with the GDPR is all the more relevant considering that data protection authorities, like France's Commission nationale de l'informatique et des libertés, have been positioning themselves to take on the role of AI enforcer.

Both co-legislators wanted to introduce some centralizing elements. The EU Council explicitly borrowed elements from the European Data Protection Board, such as establishing a pool of experts.

On the other hand, MEPs want to establish an AI office, an EU body that might be upgraded into an agency once the EU budget allows more room for maneuver. Precisely because of these budgetary constraints, the role of the AI office was limited to supporting cross-border investigations. Still, with an executive director, it would be more independent than the AI board.

"My number one target for further improvement is enforcement and governance. We need to learn from the GDPR's shortcomings and prevent things like contradicting definitions or different interpretations of key terms," conservative MEP Axel Voss said.