After three days of intense negotiations, the European Union reached a political agreement 8 Dec. on the Artificial Intelligence Act, which would be the world's first comprehensive regulation of AI.
The trilogue process between the European Commission, Council of the European Union and European Parliament stretched on for more than 32 hours over three days last week, with negotiators announcing the deal late Friday night.
European Commission President Ursula von der Leyen welcomed the agreement, calling it historic, and noting it will have a global impact: "Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI."
"This is a historic achievement, and a huge milestone for the future," said Carme Artigas, the Spanish secretary of state for digitalization and AI, adding, "in this endeavor, we managed to keep an extremely delicate balance: boosting innovations and update of (AI) across Europe whilst fully respecting the fundamental rights of our citizens."
Both parliamentary co-rapporteurs of the act spoke about the agreement, as well. Brando Benifei said "the effort was worth it," and that "correct implementation will be key." Dragoș Tudorache, who spoke with the IAPP about the negotiations last month, noted the AI Act "protects our (small and medium-sized businesses), strengthens our capacity to innovate and lead in the field of AI, and protects vulnerable sectors of our economy."
High-level elements in provisional agreement
The announcement came nearly a month after negotiations came to a halt when members of Parliament walked out of a meeting with the Council after member countries requested foundation models be exempt from regulation. After the Spanish presidency of the Council stepped in to help put negotiations back on track, all eyes focused on the trilogue that started 6 Dec.
The first tranche of negotiations centered on regulating foundation models and establishing an agreeable governance structure, while the second marathon session dealt with national security and law enforcement exemptions.
General purpose AI: After more than 22 hours of negotiations beginning 6 Dec., the co-legislators agreed on rules for general purpose AI, including transparency requirements. More powerful models that could cause systemic risk would be required to comply with an additional layer of obligations around risk management, monitoring of serious incidents, model evaluation and red teaming. According to the Commission, these new obligations will be implemented through codes of practice developed by industry, civil society and the scientific community, among others.
Governance: At the national level, governance of the act will be exercised by competent national market surveillance authorities. At the EU level, a new AI Office, housed within the Commission, will work to coordinate governance among member countries and supervise the enforcement of the rules related to general purpose AI.
Prohibitions: Legislators agreed on the unacceptable-risk category, which covers systems that will be banned outright. These include systems that manipulate human behavior to circumvent free will, social scoring and "certain elements of predictive policing." Emotion-recognition technology in workplaces and educational institutions will also be prohibited. Remote biometric identification in public will be banned, with specific exemptions for law enforcement, an issue that led to intense negotiations during the final trilogue.
High-risk systems: Systems identified as high risk, including those used in critical infrastructure, medical devices, law enforcement, and the administration of justice and democratic processes, among other areas, will be required to implement risk mitigation, use high-quality datasets, log activity, maintain detailed documentation, ensure human oversight, provide clear information to users and meet cybersecurity requirements. Authorities will work with organizations to test systems in regulatory sandboxes.
Human rights impact assessment: Deployers of high-risk AI systems will be required to conduct these assessments prior to launch.
Transparency: AI systems such as chatbots will be required to inform users they are interacting with a machine. Deepfakes must be labeled, and users must be informed when a biometric categorization or emotion recognition system is used.
Fines: Penalties are tiered. Violations involving banned AI applications could draw fines of up to 7% of global annual turnover or 35 million euros, whichever is higher; violations of the obligations for high-risk systems, up to 3% or 15 million euros; and the supply of inaccurate information, up to 1.5% or 7.5 million euros.
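As a minimal illustration of how those "whichever is higher" ceilings work, the Python sketch below encodes the three tiers. The tier labels, function name and the turnover figure in the example are hypothetical; the amounts simply restate the figures above.

# Fine ceilings from the provisional agreement: (share of global annual
# turnover, fixed amount in euros). The applicable ceiling is the higher one.
FINE_TIERS = {
    "banned_application": (0.07, 35_000_000),
    "high_risk_obligation": (0.03, 15_000_000),
    "inaccurate_information": (0.015, 7_500_000),
}

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    # Return the higher of the percentage-based and fixed ceilings.
    rate, fixed = FINE_TIERS[violation]
    return max(rate * global_turnover_eur, fixed)

# Example: a company with 2 billion euros in global annual turnover that
# violates a prohibition faces a ceiling of max(0.07 * 2e9, 35e6) = 140 million euros.
print(max_fine_eur("banned_application", 2_000_000_000))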
What's next?
The latest text of the legislation has yet to be published, and co-legislators still have work to do to finalize it. According to Euractiv's Luca Bertuzzi, who spoke with the IAPP on The Privacy Advisor Podcast, as many as four technical meetings are scheduled this week, with others in the queue for early January.
With parliamentary elections looming next year, there is still pressure to get the text finalized, translated into dozens of languages and published in the Official Journal. Once that is complete, the AI Act would enter into force 20 days after publication and become fully applicable after two years. However, the prohibitions take effect after six months and the general purpose AI rules after one year.
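To make that staggered timeline concrete, here is a small Python sketch that computes the key dates from a hypothetical publication date. The publication date used is an assumption for illustration only; the actual date depends on when the finalized text appears in the Official Journal.

from calendar import monthrange
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Add calendar months, clamping to the last day of the target month.
    y, m = divmod(d.month - 1 + months, 12)
    y, m = d.year + y, m + 1
    return date(y, m, min(d.day, monthrange(y, m)[1]))

published = date(2024, 4, 1)  # hypothetical Official Journal publication date
entry_into_force = published + timedelta(days=20)  # 20 days after publication

print("Entry into force:        ", entry_into_force)
print("Prohibitions apply:      ", add_months(entry_into_force, 6))
print("General purpose AI rules:", add_months(entry_into_force, 12))
print("Act fully applicable:    ", add_months(entry_into_force, 24))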
In the meantime, the Commission plans to launch an "AI Pact," which will bring together AI developers from around the world who commit on a voluntary basis "to implement key obligations of the AI Act ahead of the legal deadlines."
Stakeholders offer reaction
In comments provided to the IAPP, Considerati Managing Director Cornelia Kutterer, who also serves on the IAPP AI Governance Center Board, said the provisional agreement "is an important step towards a safer and transparent AI model ecosystem."
"Implementation of the AI Act will necessitate building administrative and regulatory capacities across public-sector administrations and companies," she said. "Companies and public administration will have to develop new competencies, possibly including infrastructural changes, to comply with the Act's requirements. Initiatives for training, awareness, and capacity-building will be vital to ensure stakeholders at all levels are equipped to meet the Act's demands."
Though he considers the provisional agreement "to be an imperfect attempt to regulate AI," Qantm AI CEO Seth Dobrin, also an AIGC board member, acknowledges that "it is better than nothing."
For others in industry, the sheer size of the regulation and the collective inexperience of both companies and regulators will be challenging. "What we have is a regulatory behemoth three times the size of the General Data Protection Regulation," said DIGITALEUROPE's Alberto Di Felice, CIPP/E, who added there will not be enough people, resources or standards in place ahead of the two-year window.
"Because this is placement-on-the-market legislation, that means a lot of innovative products won't make it to market," Di Felice said. "That's not only chatbots, but also critical stuff like medical devices and industrial machines. I expect this to be the beginning of our process to realize how onerous this will all be, and of hard work in the coming years to solve some of the challenges — and possibly correct some of the mistakes we’ve made."
Kutterer, however, shared thoughts on some of the practical implications that will stem from the regulation, including how it will integrate with existing EU regulations. "The AI Act does not interact in isolation," she said. "Compliance teams will need to look out for its interaction with GDPR but also other newer regulations such as the Data Act, DSA and DMA. An assessment of its impact on European innovation, research and SMEs will take time."
There will also be a lot of work for companies using high-risk applications. "For those products and services that are considered high-risk, efforts along the value chain will be required," said Kutterer. "Technical documentation requirements, risk assessment requirements and human impact assessments will interact with existing data protection impact assessments, and in the case of product safety requirements with existing safety requirements."