Editor's note: The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.
When the EU Artificial Intelligence Act was adopted, its provisions on high-risk systems were to become applicable 2 Aug. 2026. The deadline was clear, and everyone began working toward it, keeping an anxious eye on a standards development process that was arguably slower than most would have liked.
Today, that timeline is muddled and uncertain, offering no legal certainty to developers, to the customers deploying their systems or to the regulators tasked with enforcing these obligations when the time comes.
The deadline is the subject of one of the most pivotal amendments put forward by the European Commission in its AI Omnibus draft regulation, presented 19 Nov. 2025.
The Commission proposes that high-risk obligations would apply only after it confirms adequate compliance support, such as harmonized standards and guidelines, is available, potentially extending the timeline to December 2027. Obligations would then apply six months after the Commission's decision for high-risk AI systems under Annex III and 12 months after for those listed under Annex I of the AI Act.
Negotiations on the AI Omnibus are starting in Brussels and will determine whether the high-risk provisions currently due to apply 2 Aug. 2026 will indeed be delayed.
An illustrative timeline shared by European Parliament officials suggests companies may not have clarity on that point until May at the earliest — putting a finer point on the state of confusion in the marketplace.
According to this timeline, the optimistic scenario is an agreed and applicable AI Omnibus by late July, but a lot hangs on the negotiation between the Council of the European Union and the European Parliament, and the latter offers little predictability at the moment.
Interestingly, both the 23 Jan. draft compromise text of the Council of member states and the 5 Feb. draft report of the two lead committees in the European Parliament, the Committee on Civil Liberties, Justice and Home Affairs and the Committee on the Internal Market and Consumer Protection, suggest setting fixed deadlines for the application of high-risk rules: 2 Dec. 2027 for Annex III systems and 2 Aug. 2028 for Annex I systems.
A simplistic two-against-one rule would suggest this may be the winning option during trilogue negotiations, with emphasis on "may." Many other stakeholder groups still have to weigh in. But more importantly, this approach would set a firm target date ahead of the standardization process, and opinions vary on whether that would be helpful in reaching an agreement.
As things stand, the Schrödinger window for operationalization and compliance is both quite broad and incredibly narrow, and we won't know for sure which scenario will prevail until late spring.
The AI Omnibus proposes several other changes that could have a transformational effect. The Commission is proposing to remove the registration obligation for providers of AI systems that do not fall into the high-risk category. This amendment has been largely welcomed by industry but heavily criticized by civil society and some legislators, who fear it could become an enforcement loophole.
Another change giving many observers pause is the proposed softening of AI literacy obligations for companies in scope. The European Parliament, the European Data Protection Board and the European Data Protection Supervisor are pushing back. Many organizations also continue to invest in AI literacy despite the potential loosening of AI Act obligations because it is an integral part of effective internal AI governance.
The pace of negotiation is less critical for these provisions than it is for the high-risk AI requirements, but it forces companies to weigh their options and make choices regardless.
Isabelle Roccia, CIPP/E, is the managing director, Europe, for the IAPP.

