Editor's note: The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.

I was excited to join the AI Action Summit in Paris this week. One event that stood out was an excellent session at the Organisation for Economic Co-operation and Development featuring a discussion among thought leaders and data protection authorities from the European Data Protection Board, Australia, Estonia, France, Germany, Ireland, Japan, Singapore, Slovenia, South Korea and the U.K.

The event was broken into two sections, the first discussing "Enhancing Access to and Sharing of Data in the Age of AI" and the second considering "Trustworthy and Accountable Data Governance in the AI age: Data Privacy Perspectives."

Collectively, the regulators noted that AI adoption is growing exponentially and AI capabilities are developing at a rapid rate. This presents huge opportunities, particularly in health, research and productivity gains. It also presents challenges, from environmental issues to ensuring AI is trustworthy and safe, and raises the question of how to balance regulation with innovation.

Regulators from South Korea's Personal Information Protection Commission detailed the commission's proactive approach to AI governance. The PIPC established an internal task force for large language models, published guidance documents and implemented a regulatory sandbox system, highlighting a successful case study in which AI models were developed to combat phishing schemes using real victim voice data.

France's DPA, the Commission nationale de l'informatique et des libertés, identified key challenges, including data minimization, lawfulness and transparency, particularly in web scraping. Regulators discussed mitigations, including developing AI based on privacy-by-design principles and considering solutions such as privacy-enhancing technologies, or PETs, which play an increasingly important role in data governance.

The CNIL is clarifying legal frameworks and providing practical tools for designing virtuous systems, including guidelines, opinions, methodologies for dataset development, webinars and guidance on PETs. It also runs a thematic sandbox to support ex-ante engagement.

Ireland's Data Protection Commission challenged the traditional binary debate between innovation and regulation, with regulators stating innovation and respect for the rights of individuals can — and must — co-exist.

The DPC advocates for an "open door" ex-ante approach to engagement while taking enforcement action where necessary. Recent interventions have included requiring companies to pause technology deployments due to inadequate rights assessments and safeguards; in one case last July, the DPC brought an application to the High Court to halt the processing activity.

Regulators from the DPC said organizations must do their homework, including carrying out robust and properly documented assessments prior to deployment. They referenced a case in which an organization had not done the preparatory work and failed to implement its self-identified measures on transparency and opt-outs.

The DPC is actively working with regulators domestically and internationally, for example on the International Accreditation Forum's recent project on legitimate interests assessments. It also works with European colleagues individually and through the EDPB, as evidenced by the set of questions it raised that led to the EDPB's December 2024 opinion on the processing of personal data in the context of AI models.

DPC regulators said regulatory clarity, consistency, an open-door policy for engaging companies and a willingness to take action where proportionate and necessary are all needed to ensure balanced and effective enforcement.

The U.K.'s Information Commissioner's Office outlined three focus areas: online tracking and advertising, children's protection, and AI and biometrics. It has issued guidance clarifying that legitimate interests can be a lawful basis for training AI models and has addressed challenges around purpose specification, necessity and accuracy requirements.

Given the speed of AI developments, including agentic AI, and the potential for unknown future challenges, ICO regulators emphasized the importance of domestic collaboration through the U.K.'s Digital Regulation Cooperation Forum and international collaboration through partnerships and organizations like the OECD.

Regulators from the Office of the Australian Information Commissioner stressed that while there is no AI-specific regulation in Australia yet, AI is not exempt from existing legal frameworks, despite its novel nature. Two key pieces of guidance were recently issued in Australia: one on using commercially available AI products, emphasizing accuracy and reasonable expectations, and another on scraping publicly available personal information.

A significant portion of the discussion focused on challenges and successes around international cooperation. Regulators emphasized the need for consistency and interoperability in approaches to AI governance, which will involve multiple regulatory bodies beyond data protection authorities.

In this context, forums for collaboration such as the Global Privacy Assembly, regional regulatory networks, the OECD and the Global Privacy Enforcement Network, among others, play a crucial role in fostering cooperation across borders. The European one-stop-shop mechanism was highlighted as a valuable tool for coordinated action, with examples including the task force on ChatGPT, which has since been extended to cover AI enforcement generally.

The session concluded with acknowledgment that while AI presents significant regulatory challenges, existing legal frameworks provide a foundation for governance. The key lies in applying these frameworks consistently while adapting to technological advances through international cooperation and knowledge sharing, combining practical guidance with enforcement readiness.

Kate Colleary, CIPP/E, CIPM, FIP, is IAPP country leader, Ireland, and director of Pembroke Privacy.

This article originally appeared in the Europe Data Protection Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.