MEPs reach preliminary political agreement on AI omnibus

A political agreement among MEPs on AI Act amendments heads for committee votes 18 March.

Contributors:
Joe Duball
News Editor
IAPP
Lexie White
Staff Writer
IAPP
Members of the European Parliament are moving toward finalizing a political agreement on amendments to the EU Artificial Intelligence Act. The preliminary deal among MEPs, reached during a shadow meeting 11 March, will be reflected in a report to be voted on by the Committee on Civil Liberties, Justice and Home Affairs and the Committee on Internal Market and Consumer Protection 18 March.
The European Commission's Digital Omnibus on AI is part of the broader digital simplification package the EU is considering to foster competitiveness and innovation. The AI package was separated from proposed changes to other digital rules to expedite consideration of AI Act amendments that might impact upcoming legal deadlines, with requirements related to high-risk systems, transparency and governance taking effect 2 Aug.
The preliminary compromise reportedly contains notable extensions to compliance deadlines for high-risk requirements. According to a press release from MEPs, "requirements for systems listed in Annex III would apply from 2 December 2027, while those in Annex I would apply from 2 August 2028."
"The aim is to provide legal certainty and allow more time for technical standards, guidance and national authorities to prepare," MEPs said.
The agreement and subsequent report also feature "clearer conditions for using sensitive personal data to detect and correct bias in high-risk systems, under strict safeguards," and measures to ban AI systems from generating nonconsensual explicit deepfakes.
In an email to the IAPP, Irish MEP and AI Omnibus rapporteur Michael McNamara said, "some technical negotiations are still ongoing" leading up to the 18 March vote.
Deepfake ban
According to Politico, MEPs agreed to provisions banning an AI system that "alters, manipulates or artificially generates realistic images or videos so as to depict sexually explicit activities or the intimate parts of an identifiable natural person, without that person's consent."
However, the proposed ban would reportedly not apply to companies "who have put effective safety measures (in place) to prevent the generation of such depictions and to avoid misuse."
The proposed prohibitions come after the EU launched an investigation into social platform X's AI tool Grok's alleged ability to create and share AI-generated explicit deepfakes of users, including children. X announced it has implemented safety measures to "geoblock" explicit AI-generated content in "jurisdictions where such content is illegal."
The U.K. made similar efforts to prevent AI-generated deepfakes after the incident involving Grok's systems, with proposed amendments to the Crime and Policing Bill that would prevent AI tools from creating harmful or illegal content.
EU lawmakers noted the ban aims to advance consumer protections and expand children's online safety efforts. German member of the European Parliament Sergey Lagodinsky told Politico the EU's efforts "are not only about Grok. It is about how much power we are willing to give AI to degrade people."
Stakeholder pulse check
While the new compliance dates for high-risk requirements are welcome, Marco Leto Barone, the Information Technology Industry Council's policy director for Europe, told the IAPP there are "worrying signals" emerging from the political agreement.
"The agreement rolls back several helpful provisions in the Commission's proposal. Particularly, shortening the grace period for the AI Act's generative AI transparency requirements to only 3 months will result in legal uncertainty and create compliance burden," he said.
Barone said the decision to reinstate registration requirements for certain non-high-risk AI systems ultimately misses "an opportunity for meaningful simplification."
Forty-eight EU-based trade associations wrote to MEPs and the Council of the European Union outlining the need for additional regulatory rollback in the AI Omnibus, noting there is still work to be done to ensure "unnecessary regulatory burdens are removed from Europe's industrial base and digital companies." They argued for immediate delays to the 2 Aug. deadlines while proposing exemptions from AI Act requirements for organizations already covered by AI rules under existing sectoral frameworks.
"The fast-paced negotiations on the AI omnibus risk becoming a missed opportunity to address the challenges industrial companies, from healthcare and manufacturing to energy and automotive, face when implementing the AI Act in practice," the letter stated. "Many companies are already regulated under robust sectoral frameworks but are now caught in a double or even triple layer of regulation, and classified as high-risk under the AI Act despite existing sector-specific oversight."
