Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Omnibus is the word that has been on everyone's mind lately. It could just be the circles I travel in, or my algorithms, but omnibus content seems to be popping up everywhere. So, I wanted to highlight some of the content that has caught my eye and share some specifics about what I'll be following as the debates around these proposals evolve.
What to watch and read
It's been nearly a month since the European Commission put forth proposals to reshape Europe's digital future. This three-part package includes proposals for reforming key digital rules for privacy, cybersecurity and artificial intelligence via two Digital Omnibus packages, a Data Union Strategy and a robust digital identity program to improve business transactions through a Digital Business Wallet.
These reforms were proposed while many members of the global privacy and AI governance community were in Brussels for the IAPP Data Protection Congress 2025. It could not have been a better time to be surrounded by practitioners and regulators contemplating what these changes could mean for them.
Given their potential significance, our team has dug deep into what is in these proposals. For early reactions and highlights, check out the LinkedIn Live I participated in with colleagues IAPP Managing Director, Europe, Isabelle Roccia, CIPP/E, and IAPP Research and Insights Director Joe Jones. IAPP Editorial Director Jed Bracy speaks with Laura Caroli, who helped negotiate the EU AI Act, on the latest edition of The Privacy Advisor Podcast.
Joe Jones and IAPP Principal Researcher, Privacy Law and Policy, Müge Fazlioglu, CIPP/E, CIPP/US, summarize the key components and outline the rationale for taking steps toward digital simplification. IAPP European Operations Coordinator Laura Pliauškaitė discusses the Data Union Strategy, while Roccia examines the challenges of achieving simplification across these intersecting digital rules.
Since you will get a great overview of the Digital Omnibus in the above articles — and the additional links listed below — I wanted to take some time to highlight the AI aspects I will be watching.
AI omnibus implications
Whether you see these proposals as a necessary clarification or a potential step back from some aspects of the AI Act, all AI governance professionals who must comply with the act should be watching closely.
Timing. The proposed timing for enforcement of high-risk systems under the AI Act has seen the most discussion. The proposal would push implementation deadlines for high-risk AI systems to no later than December 2027 for Annex III systems and August 2028 for Annex I systems.
Here, the interesting component for me has been the rationale. The argument for delaying enforcement is that the supporting standards are still pending. However, these standards are voluntary, so there are other ways to demonstrate compliance.
This highlights the importance of standards and the need for additional clarification and guidance given the act's comprehensive nature.
There are outstanding questions I will be looking to have answered. Will the deadline continue to be delayed if standards aren't completed? What will be an acceptable completion of standards for enforcement to begin, since the idea is that standards will continue to be developed and evolve? Will there be greater emphasis on augmentation of sectors that intersect with the AI Act?
The biggest challenge for practitioners currently is what to do. Is it necessary for those developing or deploying AI systems to prepare to meet the original August 2026 deadline? Or will this extension be approved through the trilogue process?
Expanded acceptable use of personal data. The Omnibus proposes to replace the current Article 10(5) in the act with a new Article 4a. The proposal significantly broadens this permission, allowing providers and deployers of non-high-risk AI systems to use special categories of personal data to detect and correct bias. Previously, this was only permitted for providers and deployers of high-risk systems.
The proposal also intends to change the threshold from "strictly necessary" use of personal data to "necessary use." This one-word change could greatly expand the scope of what is acceptable use of personal data for training of AI systems. Finally, it provides a legal basis for a broader interpretation of EU General Data Protection Regulation exceptions like public and legitimate interest.
While the intent of reducing bias in all AI systems is a worthy one, the mechanism for doing so again raises the tension between protecting personal information and ensuring fairness. Also, are there other objectives to ensure the protection of individuals outlined in the act? Could legitimate interest arguments be made for accuracy and safety objectives, further bolstering the desire to collect more data for training and ongoing monitoring?
Additionally, questions about when training technically starts and stops if an AI system continues to learn are not addressed. The same goes for what counts as acceptable, necessary use for one system when it might differ for another. These were questions originally raised with Article 10(5) that the newly proposed Article 4a does not address.
Redirecting AI literacy. Changing the responsibility of who needs to be AI literate from the providers and deployers of AI systems to regulators and policymakers is swapping one issue for another. As a professional association, the IAPP sees the value that increased literacy through education, training and certification brings to the adoption and oversight of AI systems.
While emphasizing that those developing and enforcing these policies should be AI literate is important, shifting responsibility away from those who are designing, developing, managing and using AI systems seems like a step back, especially for a requirement that came with no penalties. There were many calls for greater clarity about what AI literacy means for each role involved in the AI life cycle. I agree this clarity would be useful. However, it does not seem this change will promote further definition and guidance. At least not for the private sector.
Reporting and post-market monitoring. European parliamentarians argued that only requiring self-assessments for providers of high-risk systems was balanced out by additionally requiring these systems to be registered in an EU database. The Omnibus proposes eliminating this registration requirement.
This seems to be one of the most controversial proposals and will likely see significant debate. While I personally have never been a huge fan of inventories, I do understand the rationale of balancing the need to report with reducing the burden for third-party evaluation of an assessment. It will be interesting to see if the debate then changes rules for when third-party assurance mechanisms are triggered.
What happens next?
The rationale for two Digital Omnibus packages seems to be that the Digital Omnibus on AI will see an expedited review process. Even so, from all accounts it seems unlikely the proposal will pass in advance of the August 2026 enforcement timeline for high-risk systems, as the EU Parliament will not distribute the file before the end of January.
However, as the council and parliamentary review process gets underway, I believe early 2026 will bring a lot of news about which proposals will be accepted as-is and which will see significant revision.
These review processes will continue to reveal more layers of the AI governance onion, and our team will continue to report on the updates and outcomes. While some may argue these proposals represent a step back from aspects of the AI Act, ultimately, the community is engaging with the text, which means we are getting a stronger sense of where clarity is needed and which components of this legislation are suitable guardrails for the responsible adoption of AI.
Other Omnibus articles and overviews:
- "EU Digital Omnibus: What the proposed changes to the concept of personal data mean in practice"
- "The EU's Digital Omnibus Package is out, and carries significant implications for AI"
- "The EU Digital Omnibus with Dr. Gabriela Zanfir-Fortuna"
- "First thoughts on the Digital Omnibus"
- "EU Digital Omnibus on AI: What is In It and What Is Not?"
Ashley Casovan is the managing director for the IAPP AI Governance Center.
This article originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.