Nearly three years after the European Commission first proposed a regulatory framework for artificial intelligence, the European Parliament put its final stamp on the EU AI Act. Parliament voted 523-46 in favor of the proposed regulation on 13 March.
The passage is a landmark in the conversation around AI regulation. The pending regulation creates tiers of risk, with requirements for how AI technologies within each tier must be operated. It also requires human oversight and data governance of systems, as well as technical documentation explaining how those systems work.
The AI Act takes effect 20 days after its publication in the Official Journal. There are several staggered deadlines that will govern when certain provisions take effect.
The approval by EU lawmakers was largely seen as a procedural vote, preceded by a unanimous vote from EU member states in February. After holdouts France and Germany — home to leading European AI companies Mistral and Aleph Alpha, respectively — relented, along with Austria and Italy, the bill was all but assured to pass.
EU officials were celebrating the passage on social media and in press conferences prior to the vote.
"Today is again a historic day on our long path towards regulation of AI," said Italian MEP Brando Benifei, a co-rapporteur of the act, speaking before the vote. "With the final step that will take place in council soon, the law will become a law of the European Union — the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI."
But the final vote is just the beginning of a long road of further rulemaking, delegating and legalese wrangling.
Prohibitions on AI with "unacceptable" levels of risk do not kick in until six months after the act is published. It will be a full year before the rules governing general-purpose AI take effect, and another two years after that before all rules of the act and obligations for high-risk systems apply.
Still, the EU can stake a claim as an AI regulator as other countries seek the best path to govern the nascent technology. EU Commissioner for Internal Market Thierry Breton nodded to the significance of those rules on social media, predicting it would become a model others would follow.
"Europe is NOW a global standard-setter in AI," he said on LinkedIn. "We are regulating as little as possible — but as much as needed!"
What happens next
After clearing Parliament, the AI Act text needs to be translated into the EU's official languages and errors within the text need to be cleaned up, a job that falls to the lawyer-linguists. European Parliament Senior Policy Advisor Laura Caroli indicated approval of the final text will likely happen sometime in April, putting the entry-into-force date closer to May.
"We would have hoped for this to be faster but it is not only up to us and surprisingly this is not the only important file in this legislative term," she wrote a week before the vote.
Standing up the body that will govern the act, the European AI Office, is the next domino to fall. Its immediate tasks will be setting up advisory bodies, developing benchmarks for evaluating capabilities and drawing up codes of practice. A board will advise on the act's implementation and issue opinions when challenges arise.
A website for the office announcing its creation went live in January and job postings are starting to pop up, but when the office will begin its work is unknown for now.
Stakeholders also do not need to wait until the various deadlines approach to get involved, according to Kai Zenner, the head of office and digital policy advisor to MEP Axel Voss. Interested parties can look to join standardization bodies and reach out to the European Commission, which now has the task of developing guidelines and crafting delegated and implementing acts — what Zenner called "secondary legislation."
"The AI Act can still improve and can be made more specific, and so on," he said.
Zenner also encouraged companies to take a proactive approach by becoming familiar with how the AI Act will affect their work now, rather than waiting until deadlines arrive.
Looking ahead, reflecting back
For some, the vote to approve was a time for celebration.
In a joint press conference, co-rapporteurs Dragoş Tudorache and Benifei characterized the act as a crucial first step toward a global standard on AI. Tudorache said regulators had sent "a signal to the world" that the EU takes AI seriously, and that others should do the same.
"Now we have to be open to work with others in how we will be promoting this model towards other jurisdictions that might be interested to follow suit," he said.
Tudorache added that regulations should be flexible to prepare for when AI changes, noting the limits of its capabilities are still unknown.
The final approval did not come without stakeholder disagreement.
One point of ongoing contention is the use of biometric cameras. The absence of a total ban on law enforcement's use of untargeted AI-powered facial recognition technology was a key sticking point for the Center for Democracy and Technology.
"This is a law which is supposed to protect people's most basic human rights and yet it seems to be allowing, through its exemptions, the most nefarious kind of AI, one which invades the right to privacy of often the most marginalised and vulnerable groups," CDT Counsel and Director of the Equity and Data Programme Laura Lazaro Cabrera wrote in a Euractiv op-ed.
Benifei said the Parliament's initial position on the cameras — to ban them completely — needed to shift to get to a compromise most parties could agree on.
"We are convinced that with this text, there is no risk at all of mass surveillance because we have put extremely strict safeguards due to a very hard negotiation," he said. "We didn't make the council very happy with this very hard stance on this topic, but in the end, we found an agreement."
Zenner said some vague elements of the text were never resolved, including the list of high-risk systems in Annex III. That, combined with the complicated governance system within the act, made him fear smaller companies may choose to avoid investing in AI altogether.
But Zenner also indicated there was "huge" political pressure to get the act passed from people who fear the impacts AI can have on society and those who want innovation to move as quickly as possible. With so much pressure to finish the act prior to elections, he was skeptical a better product could have been achieved.
"On some aspects, it's better we have now a not-so-good or not-so-perfect text, than nothing," he said. "But if this means that this not-so-good text is now staying as it is for for many, many years, then I think the negative effects will be stronger than the positive effects."