Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Beannachtaí ó Bhaile Átha Cliath. Greetings from Dublin.
"Privacy and (artificial intelligence) governance professionals have a critical role to play in the unfolding AI revolution," said Irish Minister for Trade Promotion, AI and Digital Transformation Niamh Smyth, who opened the IAPP AI Governance Global Europe 2025 this week in Dublin, Ireland.
Ireland adopted and updated its national AI strategy last year. Aligning strongly with the EU-level approach laid out in the EU's recently published AI Continent Action Plan, Ireland is focusing on three pillars to foster AI development: AI for good, an AI innovation ecosystem and AI-ready citizens. Stakeholder engagement will also be "a big piece" cutting across all three, Smyth said.
She also underlined that it is essential to ensure government has access to independent advice as it continues to develop, implement and monitor progress across its AI plans and ecosystem.
"AI governance is not an obstacle to AI development. We must be cognizant of risks it can pose but a prerequisite to the wide spread of AI is public trust. We need safeguards," she said.
The risk-based approach built into the EU AI Act gives Ireland and other European governments a way to calibrate oversight against harm along the AI supply chain. Dublin is fully committed to a comprehensive implementation of the AI Act and is considering giving a single centralized authority a mandate for supervision.
Indeed, approaching AI governance still raises many questions for any organization. Attention is shifting away from existential risks toward near-term risks, for example those related to robotics and job displacement. Representatives of OpenAI, Anthropic and Irish robotics startup Akara spoke to the challenges of where and how to start the governance journey during their AIGG keynote session.
Several fundamentals emerged:
- Understand what problem AI would solve in an organization. In all the excitement about AI, find what is truly needed to advance the organization's mission.
- AI is a cross-functional project. The teams trusted with its governance must be, too.
- Define a structural governance model that works for the organization. There is no best-in-class model in these early days of AI. How a company chooses to manage AI risk depends on its size, sector, risk profile and AI use.
- As many EU digital laws overlap, a holistic approach is required to avoid silos and to leverage tools, policies and processes already in place across other parts of the organization's digital responsibility efforts.
- Leveraging AI to manage AI also holds some promise. Tactical tools can help ingrain a responsible approach throughout the company in a holistic way.
When it comes to the EU AI Act, the lack of guidance on practical elements and the myriad authorities involved are among the biggest compliance challenges for many in the space. Conversely, the act's staggered implementation timeline has been very helpful. As the landscape of regulatory authorities becomes clearer, organizations will have an opportunity to build relationships with their regulators. That can be tremendously helpful for getting a sense of regulators' goals, explaining one's technology and approach, and building a mutual understanding of where everybody is coming from.
Organizations should acknowledge the reality that there will be a lot of ambiguity. In the meantime, creating a mindset of responsible AI use will be important to build up governance across the organization.
Isabelle Roccia, CIPP/E, is the managing director, Europe, for the IAPP.
This article originally appeared in the Europe Data Protection Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.