Editor's note: The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains. 

In recent years, global privacy practitioners have grown accustomed to relatively stable data privacy concepts and standards, largely anchored by the EU General Data Protection Regulation acting as a global compliance baseline.

But what happens if that baseline is modified, as the European Commission's November 2025 reforms propose, and other jurisdictions that have gone to great lengths to emulate the GDPR are left out of step? After a decade of GDPR catch-up around the world, does global privacy now enter another protracted period of patchwork and misalignment?

The European Commission's proposed GDPR reforms on artificial intelligence and pseudonymized data represent an important recognition of the need for a more proportionate and risk-based approach to data protection and AI. But a dual challenge now lies ahead: getting clear and meaningful reforms swiftly through the EU legislative process and into law, and ensuring that the rest of the world moves in step, cutting the lag toward a clear, consistent and updated global privacy baseline that remains fit for purpose.


This depends on lawmakers and regulators around the world being clear and joined up now about how privacy rules intersect with the AI value chain. Improved legal certainty does not require a full-scale rewrite of data privacy laws, but there is a legitimate need for more flexibility and adaptation, delivered in a coherent and structured way. The moment also presents an opportunity for a genuine, more geographically balanced global baseline to emerge.

In addition to the European Commission's specific legislative clarifications, there are a series of priority areas where privacy lawmakers and regulators from different jurisdictions can work in partnership with industry to develop a clearer global approach. 

First, privacy regulators should establish a more structured international process to develop and issue clear, consistent guidance on how privacy laws are compatible with, and can promote, responsible AI development and use. This could be led by the Organisation for Economic Co-operation and Development or build on initial statements that have emerged from the Global Privacy Assembly.

One priority could be a standardized legitimate interest assessment framework adapted to the development and deployment of AI systems, increasing legal certainty and accelerating adoption across jurisdictions.

Second, more work is needed to promote global guidance on the real-world application of privacy-enhancing technologies (PETs) as an important enabler of secure and compliant AI systems. Techniques such as differential privacy, data minimization and secure computation are proven methods, central to increasing trust, mitigating risk and enabling responsible access to data in AI contexts. Upcoming OECD work on a global repository of PETs will be an important step toward global policy guidance specific to AI contexts, backed by leading regulators.
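
To make the differential privacy reference concrete, here is a minimal sketch of the Laplace mechanism applied to a simple count query. It is an illustrative assumption, not drawn from the ITI paper or any regulatory guidance; the dataset, epsilon value and function names are invented for the example.

```python
import numpy as np

def dp_count(records, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person's record
    changes the count by at most 1), so the noise scale is 1 / epsilon.
    """
    true_count = len(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative only: a toy dataset and an arbitrary privacy budget.
records = ["user_a", "user_b", "user_c", "user_d"]
print(dp_count(records, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy; calibrating that trade-off for real AI pipelines is exactly the kind of question cross-border PET guidance would need to address.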

Third, the European Commission's proposals to amend the GDPR to introduce a more flexible, context-specific approach to the use of pseudonymized data should be swiftly adopted and replicated at the global level. This would further incentivize the use of pseudonymized datasets in AI models and systems, a critical tool for enhancing data quality and for streamlining responsible, risk-based approaches to data governance more broadly.
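
As an illustration of what pseudonymization can mean in practice, the following sketch replaces a direct identifier with a keyed hash. It is an assumption-laden example, not a statement of what the Commission's proposal requires; the key handling and identifiers shown are hypothetical.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The output can no longer be attributed to a person without the secret key,
    which must be stored separately. That separation is the core property the
    GDPR attaches to pseudonymized (as opposed to anonymized) data.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical example: pseudonymize an email address before it enters a training set.
key = b"store-this-key-separately"  # in practice, a managed and rotated secret
print(pseudonymize("jane.doe@example.com", key))
```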

Additionally, further guidance is needed on how AI can be used to improve privacy standards. AI systems can be designed to automate tasks to limit unnecessary exposure of personal data and to help make privacy compliance more effective and scalable. These features can help minimize personal data use while still enabling personalized experiences. Regulators and industry should actively partner to adapt privacy rules and norms to the next wave of cutting-edge AI models.
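
A hedged sketch of the kind of automation described above: a hypothetical pre-processing step that strips direct identifiers from free text before it is logged or passed to a model. The patterns and placeholder tags are illustrative only; production systems would rely on vetted PII-detection tooling.

```python
import re

# Hypothetical patterns for the sake of the example.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str) -> str:
    """Strip direct identifiers from free text before it is logged or
    forwarded to a model, keeping only what the downstream task needs."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(minimize("Contact Jane at jane.doe@example.com or +1 555 010 9999."))
```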

The Information Technology Industry Council's new paper "Privacy & Generative AI: ITI Global Policy Recommendations" provides further information on the main areas where privacy and AI intersect and a template for how privacy stakeholders can move forward in a more deliberate and unified way.  

Data privacy rules are foundational to the digital economy, but they cannot stay immune to the technological and political shifts currently at play. 

In a world where access to data is already a geostrategic asset, there is active work ahead to seize the opportunity presented by AI and carefully adapt global approaches so privacy rules remain in step and effective.

Calibrated modifications to the global baseline are key, but the next phase of global privacy policymaking must be coherent by design, requiring a responsive and synchronized effort across the global privacy stakeholder community. 

Robert McGruer, CIPP/US, is a senior director of policy at the Information Technology Industry Council.