Announced on 19 Nov. 2025 as one of three parts of the European Commission's Digital Package, which also includes the Data Union Strategy and European Business Wallets, the Digital Omnibus is a proposal to simplify the EU's Digital Rulebook on data, cybersecurity, and artificial intelligence. It seeks to reduce business costs and improve innovation.

As the first attempt at streamlining the EU's "digital acquis," the Omnibus consists of two proposed regulations: a Digital Omnibus that would amend, among other legislation, the EU General Data Protection Regulation, ePrivacy Directive, NIS2 Directive and Data Act, and a Digital Omnibus on AI that would amend the EU AI Act.

The official publication of the Commission's proposals triggers the formal legislative process, which will include trilogue negotiations among the Commission and its co-legislators, the European Parliament and the Council of the European Union. The process will also include opinions from the European Data Protection Board and the European Data Protection Supervisor, and will likely take many months. In the meantime, it is worth delving into the motivation behind and substance of the Commission's key proposals while closely monitoring debate over the scope of the reforms.

Motivation for the Digital Omnibus

The Digital Omnibus comes at a critical time for Europe, whose "accumulation of rules has sometimes had an adverse effect on competitiveness." It marks the Commission's most significant effort to date to implement recommendations made in the Draghi report, which called for action to spur innovation in Europe, whose "growth model is fading" as "inaction threatens not only [the EU's] competitiveness but [its] sovereignty itself," in the words of Professor Draghi.   

The Commission seeks to strike a balance among three aims:

  • Reduce compliance costs, especially for small and medium-sized enterprises and small mid-cap enterprises.
  • Preserve the same standard for the protection of fundamental rights.
  • Increase competitiveness and innovation within the EU.

The Digital Omnibus builds on prior proposals extending to SMCs various compliance mitigation measures previously available only to SMEs. These include the Omnibus IV Simplification Package, published in May 2025, which exempts SMCs from the GDPR's obligation to maintain records of processing activities.

The Commission projects that its proposals, if adopted as they stand, would reduce administrative costs for businesses over the next several years by eliminating duplicative compliance obligations and lightening compliance regimes for SMEs and SMCs. Total savings are expected to reach at least 6 billion euros for businesses and public administrations by the end of 2029.

In remarks following the release of the package, Executive Vice-President Henna Virkkunen said that "simplification does not mean softening our standards," but that Europe needs "immediate steps to get rid of regulatory 'clutter.'" Noting that this moment was "not a reopening of the GDPR," whose "very core remains intact," Justice Commissioner Michael McGrath emphasized that the targeted measures of the Digital Omnibus seek to "further harmonize the application of the GDPR and to simplify certain obligations." 

Importantly, many of the proposed changes seek to legislatively codify, clarify or harmonize interpretations of various digital legislation made by the Court of Justice of the European Union or applicable regulators. Although many reforms in the Digital Omnibus would revise other laws, such as the Data Act, a significant number of the proposals are targeted at the GDPR.

Key proposals of the Digital Omnibus targeting the GDPR

The GDPR entered into force on 24 May 2016 and became fully applicable on 25 May 2018. At nearly 10 years old, it is arguably among the oldest, most influential and most heavily interpreted regulations in the EU's digital rulebook, parsed by courts, regulators and practitioners alike. Despite many years of conjecture suggesting that it would not be reopened for reform, the Commission has proposed a number of amendments.

Relative definition of personal data   

One of the key proposed amendments in the Digital Omnibus redefines personal data under GDPR Article 4. Approaching it from a more relative, entity-driven lens, the definition of personal data would be revised to exclude information where the entity holding it does not have "means reasonably likely to be used" to identify the individual.

This proposed revision seeks to reflect the CJEU rulings in the Single Resolution Board and Breyer cases, the latter concerning the Data Protection Directive, which held that information would not be considered personal data for an entity, and thus would fall outside the GDPR's scope, if the entity "does not have means reasonably likely to be used to identify the natural person to whom the information relates."

The proposed definition means that pseudonymized data could fall outside of the scope of the GDPR in the hands of one entity, even if another entity could identify the individual. 

The proposal would also empower the Commission to adopt additional implementing acts that specify when pseudonymized data constitutes personal data for a given entity based on the state of the art in available technologies and techniques.

The EDPB's planned stakeholder event to review its anonymization and pseudonymization guidelines in light of the CJEU's SRB ruling will likely provide more insight into this redefinition of personal data.

Reduced transparency obligations for data controllers

GDPR Article 13 requires data controllers that directly collect personal data from individuals to disclose certain information about the processing of their personal data. This information is typically disclosed in privacy notices. 

While the current disclosure obligation does not apply when the data subject already has the information, the proposal would broaden this exemption. It would also apply where "there are reasonable grounds to expect that the data subject already has the information" and where "the processing is not likely to result in a high risk to the data subject," within the meaning of Article 35.

The proposal introduces further exemptions from the disclosure obligation, including when processing takes place for the purpose of scientific research, when disclosing information to the data subject "proves to be impossible or would involve a disproportionate effort," or when disclosure "is likely to render impossible or seriously impair the achievement of the objectives of that processing." An example is when a controller did not know or anticipate at the time of collection that it would subsequently process personal data for scientific research. In such cases, the controller is still required to indirectly inform data subjects, such as by making the information publicly available.

Expanding legitimate interests to include scientific research and AI model development

The Digital Omnibus includes several amendments to the GDPR that are meant to facilitate research and innovation in the EU. Some specific proposals: 

  • Provide a definition of scientific research that "does not exclude that the research may also aim to further a commercial interest" (Article 4).
  • Specify that further processing for scientific purposes is compatible with the initial purpose of processing (Article 5(1)(b)).
  • Clarify that scientific research constitutes a legitimate interest within the meaning of Article 6(1)(f).

The proposal clarifies that the processing of personal data for the development and operation of AI models may be carried out on the basis of legitimate interest under GDPR Article 6. This reflects the opinion issued by the EDPB in December 2024 on the processing of personal data in the context of AI models in light of the GDPR, which affirmed that legitimate interest may serve as a legal basis for the training and deployment of AI models.

However, the proposal clarifies that legitimate interest cannot be used as the basis for the processing of personal data to develop and use AI models where other EU or national laws explicitly require consent. For example, Article 5(2) of the Digital Markets Act requires gatekeepers to obtain end-user consent before engaging in various personal data processing activities. Furthermore, controllers relying on legitimate interests still need to demonstrate necessity and proportionality through a balancing test; they must also implement appropriate safeguards, including minimizing data used for AI training and respecting data subjects' right to object to the processing of their personal data.

The proposal also introduces a new exemption from the GDPR's Article 9 prohibition on processing "special categories of personal data" for the training and deployment of AI in cases where "the controller does not aim to process special categories of personal data, but such data are nevertheless residually processed." Controllers must still implement certain technical measures to prevent the collection and minimize the processing of sensitive data and ensure its removal once it is identified. Notably, however, the proposal leaves room for EU or member state laws to derogate from this in order to require consent as a legal basis for such processing.

Rejecting or charging for data subject access requests

Data subjects' right to request access to their personal data under GDPR Article 15 has, at times, reportedly been abused. The proposals include amendments intended to address scenarios in which data subject access requests are weaponized in ways that reflect ulterior motives, such as when they are used as a form of discovery in employment or litigation proceedings. The proposal would empower controllers to either reject such requests or charge a reasonable fee to process them. It would also lower the burden of proof on controllers to demonstrate that an access request was manifestly unfounded, excessive or "overly broad and undifferentiated."

Clarifying lawful use of automated decision-making

Currently, GDPR Article 22 provides data subjects with the right not to be subject to a decision based solely on automated processing, including profiling, that produces legal effects concerning them or similarly significantly affects them. While the current safeguards, i.e., the right to human review and protections for the processing of special categories of personal data by automated means, would remain unchanged, the proposal aims to provide greater legal certainty to controllers regarding the conditions under which they can lawfully use automated individual decision-making. It clarifies that decisions based solely on automated processing may be taken when they are "necessary for entering into, or performance of, a contract between the data subject and a data controller," when authorized by EU or member state law, or when based on the data subject's consent.

When assessing the necessity of processing, it is not required that the decision be made exclusively through automated processing. In other words, a decision that "could also be taken by a human does not prevent the controller from taking the decision by solely automated processing," according to the staff working document accompanying the Digital Omnibus.  

Harmonizing data breach reporting

The Commission also proposes the introduction of a single-entry EU portal for reportable data and security breaches, following a "submit once, share widely" model to streamline obligations under the GDPR, NIS2 Directive, Digital Operational Resilience Act, Critical Entities Resilience Directive and the upcoming Cyber Resilience Act. Overlapping reporting duties for communications service providers under the ePrivacy Directive would be repealed. In addition, the EDPB would be tasked with preparing a standardized breach notification template for the Commission to adopt through an implementing act.

Under GDPR Article 33, personal data breaches must currently be reported to the competent supervisory authority using a different standard than that for communicating a breach of personal data to the data subject under Article 34. The proposal would create a single high-risk threshold for reporting breaches of personal data to both the competent supervisory authority and data subjects and extend the reporting deadline to 96 hours after becoming aware of the breach, with notifications submitted via the single-entry point.

Standardizing DPIAs

In line with its commitment in the Helsinki Statement to provide "new tools to help make GDPR application easier," the proposal would task the EDPB with creating EU-wide lists of processing activities that do and do not require a data protection impact assessment, replacing the current member state data protection authority lists. Notably, the provisions under the GDPR allowing DPAs to draw up their own lists would be repealed. 

The EDPB would also be responsible for developing a standardized template and methodology for conducting DPIAs, which the Commission could adopt through an implementing act. The lists, template and methodology would be reviewed and updated to reflect technological developments at least every three years.

Reducing 'cookie consent fatigue'

In one of the most talked-about and potentially impactful proposals for everyday users, the Digital Omnibus proposes to modernize the ePrivacy framework and address its "growing problem of consent fatigue and ineffective cookie banners." The Commission's staff report accompanying the Digital Omnibus estimates that people in the EU spend about 334 million hours on cookie banners per year, resulting in approximately 11.2 billion euros in lost labor costs annually. Estimated costs for businesses to set up cookie banners vary widely, but a 2014 evaluation and impact assessment of the ePrivacy Directive put them at around 1,200 euros per website every three years.

The proposal seeks to untangle the interplay of ePrivacy and the GDPR by moving the governance of the processing of personal data on and from terminal equipment solely to the GDPR. Moreover, it proposes to reduce instances where consent needs to be obtained for low-risk processing of such data, such as for transmitting electronic communications over an electronic communications network, providing services, measuring audiences, creating website and application usage statistics, and maintaining or restoring the security of a provided service.

Personal data obtained for these practices can be processed in reliance on any of the GDPR's legal bases, including legitimate interests. However, controllers must consider factors such as whether data subjects are children, the reasonable expectations of data subjects, the scale of processing, and whether the processing could constitute continuous monitoring of private life. 

Moreover, the Digital Omnibus aims to enhance user control and reduce the prevalence of dark patterns, manipulative interface designs that drive up consent rates, by requiring consent to be given or refused via a single-click button whenever consent is used as a legal basis for such processing.

If a data subject declines a request for consent, the proposal would impose a six-month moratorium on making new requests for consent for the same purpose.

Perhaps even more significant are the proposals to introduce universal settings-based preference mechanisms that allow users to express consent or consistently opt out across websites and applications at the browser, operating system or app store level.    

Key proposals in the Digital Omnibus on AI

First proposed in April 2021, the AI Act entered into force on 1 Aug. 2024, making it one of the newest instruments in the EU's regulatory toolkit. Nevertheless, the Commission has proposed a number of substantive and implementation-related amendments to it.

Processing sensitive data for bias detection  

Under the Digital Omnibus on AI, the Commission proposes extending the existing AI Act provision that allows high-risk AI systems to process sensitive data for bias detection and correction. This expansion would extend the allowance to other actors, AI systems and AI models.

Shifted AI literacy obligations 

In its current version, the AI Act imposes an obligation on all providers and deployers of AI systems to take measures to ensure their staff attains "a sufficient level of AI literacy." In its proposal, the Commission notes that this approach may have proved challenging to implement in practice, especially for smaller companies, notwithstanding its view that AI literacy is a "strategic priority." Accordingly, the Commission proposes to amend this AI literacy obligation by shifting responsibility to the member states and the Commission, which, in turn, should "encourage" providers and deployers to take such measures.

Simplified requirements for some 'high-risk' AI systems

Article 6(3) of the AI Act states that an AI system will not be considered high-risk, even if it falls within the scope of Annex III, if it "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons." While Article 6(4) of the existing AI Act requires providers of such systems to register themselves and these systems in an EU database of high-risk AI systems, the proposed regulation would eliminate this registration requirement.

Simplified rules for SMEs and SMCs

The Commission proposes extending to SMCs some of the benefits granted to SMEs under the existing AI Act, including simplified technical documentation requirements and special consideration in the application of penalties.

For clarity, the proposal would also introduce definitions of SMEs and SMCs, which are aligned with previous Commission recommendations, including for the GDPR.

The Commission also proposes extending the simplified compliance option for establishing a quality management system to all SMEs and SMCs, rather than limiting it to microenterprises as is currently the case under the AI Act. Further, the proposal would require SMEs and SMCs to implement a quality management system only in a manner proportionate to the size of their organization.

Clarified rules on conformity assessments

The existing AI Act imposes different conformity assessment rules on AI systems that fall within the scope of Section A of Annex I than on those that fall within the scope of Annex III. The Commission proposal seeks to clarify that, where a single AI system falls within the scope of both Annex I, Section A, and Annex III, the provider must follow the conformity assessment rules that apply by virtue of falling within the scope of Annex I, Section A.

Introducing AI regulatory sandboxes

The proposal allows the Commission's AI Office to introduce an EU-level AI regulatory sandbox for certain AI systems and requires member states to strengthen cross-border cooperation on their own sandboxes. As stated in the proposed amendment to Article 57 of the AI Act, the purpose of these AI regulatory sandboxes would be to "provide for a controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market." The providers and competent authority would then agree on a specific "sandbox plan," ensuring appropriate safeguards and, when applicable, incorporating a single document that includes the real-world testing plan.

The Commission would also be allowed to adopt implementing acts detailing arrangements "for the establishment, development, implementation, operation, and supervision of AI regulatory sandboxes."

Real-world testing rules for high-risk AI systems would also be amended, including changes that would extend testing opportunities to high-risk AI systems covered by Annex I, whereas they are currently limited to those listed in Annex III.

Clarified supervision and enforcement system

The proposal includes several provisions to clarify the AI Office's role within the European Commission and to expand its supervision and enforcement powers. In particular, it clarifies that the AI Office would have exclusive competence for the supervision and enforcement of Annex III AI systems that are based on general-purpose AI models where that model and system are developed by the same provider. It would also have exclusive competence for AI systems that constitute or are integrated into a designated very large online platform or very large online search engine within the meaning of the Digital Services Act.

The Commission's AI Office also would be required to undertake premarket conformity assessments of any such systems that are classified as high-risk and are subject to third-party conformity assessment pursuant to Article 43. These amendments mark a significant change from the existing AI Act, which arguably gives member state market surveillance authorities shared competence over the supervision and enforcement of such systems.

Delay to rules on high-risk AI

The implementation of rules for high-risk AI systems, originally due to take effect in August 2026, is proposed to be extended by a maximum of 16 months. The high-risk obligations would apply only after the Commission confirms that adequate compliance support is available, such as through applicable standards. Once that support is confirmed, obligations would apply after six months for high-risk AI systems categorized under Annex III and after 12 months for those listed under Annex I of the AI Act. The proposal includes backstop compliance dates, applicable irrespective of compliance support, of 2 Dec. 2027 and 2 Aug. 2028, respectively. High-risk systems lawfully on the market before these rules apply could remain available without new certification if no significant design changes occur.

Providers and deployers of high-risk AI systems used by public authorities still have until 2 Aug. 2030 to comply with the requirements.

Six-month grace period for transparency / marking obligations

Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, images, videos or text and were placed on the market before 2 Aug. 2026 will have until 2 Feb. 2027 to implement machine-readable detectability or marking of AI outputs.

Centralized supervision for AI systems by the AI Office

The AI Office will become exclusively competent for AI systems based on a general-purpose AI model where the model and system are developed by the same provider. It will also become exclusively competent for AI systems that constitute or are part of very large online platforms or very large online search engines, as defined under the DSA. The AI Office will have market surveillance powers and coordinate closely with national authorities and DSA enforcement.

Conformity assessment streamlining

To minimize duplication and speed up the designation process, conformity assessment bodies will be able to submit a single application and undergo a single assessment procedure when seeking designation under both the AI Act and other relevant EU harmonization legislation. 

Increased post market monitoring flexibility

The requirement to follow a harmonized Commission template for postmarket monitoring plans will be removed. Providers will instead maintain a plan in their technical documentation, informed by Commission guidance rather than mandated by an implementing act.

Next steps

The Digital Omnibus Package adopted by the Commission remains open to feedback for an eight-week period, which is extended daily until the proposal is made available in all EU languages; the deadline currently stands at 29 Jan. 2026.

This feedback will then be considered within the subsequent legislative debate as the proposal proceeds through the EU's ordinary legislative procedures in the European Parliament and the Council. The final text will depend on the three institutions' review; further amendments are all but certain to be made during this legislative process.

The Commission has also launched the Digital Fitness Check, a consultation meant to analyze the "interplay between the different rules, their cumulative impact on businesses, and how effectively they support the EU's competitiveness, values and fundamental rights." The consultation is open until 11 March 2026 and may extend discussion of reforms to other digital laws, such as the Digital Services Act and Digital Markets Act.

Müge Fazlioglu, CIPP/E, CIPP/US, is the principal researcher, privacy law and policy, and Joe Jones is the director of research and insights, for the IAPP.