Businesses should be ready for uncertainty and prepared to adapt their artificial intelligence governance strategies as the regulatory landscape around the EU AI Act takes shape, according to representatives of leading AI developers speaking at the IAPP AI Governance Global Europe 2025 conference in Dublin, Ireland.
During an AIGG keynote panel discussion, Anthropic Associate General Counsel Ronan Davy said it will take time for case law, regulatory tactics and industry standards around the EU's landmark regulation to develop. That is going to create ambiguity for developers and deployers as the act's staggered timeline rolls out and more obligations take effect.
It will be helpful for practitioners to recognize "there is going to be ambiguity, and that's OK," he said. "Know that the compliance program you build for day one is going to continuously reiterate and evolve."
Davy's comments underscore the challenges facing AI users and creators while elements of the act's enforcement mechanics remain uncertain.
The finalization of the EU general-purpose AI Code of Practice, meant to provide voluntary governance rules for companies, has been delayed until this summer. Technology companies are pushing hard to create more favorable conditions under the draft code, drawing frustration from civil society groups and the lawmakers involved in the act's creation.
And as uncertainty grows around the code, the European Commission is also signaling potential targeted changes to the AI Act in the not-so-distant future. According to Politico, Head of Unit for AI Policy Coordination and Development Kilian Gross indicated the European Commission's "first focus" is to "simplify the implementation" of the AI Act for companies, adding that more substantive changes could be explored "if (simplification) should not be enough."
Davy said he has yet to see one uniform approach for how to compose a compliance team, but more businesses are starting to approach overlapping digital laws from a holistic perspective.
"I think as more and more EU laws come online, we're starting to see increased intersectionality and complexity in terms of how they will be managed in parallel," he said.
Those working in a field with established regulations will also likely find meeting the AI Act's compliance challenges easier, said Niamh Donnelly, co-founder of Irish robotics company Akara. The company's flagship robot disinfects hospitals, and Donnelly said the long history of health care regulation gave her compliance team a strong baseline.
That thinking shaped a privacy-by-design approach to the robot, which uses an infrared camera instead of a conventional one to protect people's privacy, Donnelly said.
"So you have a sensor in a robot — maybe look at what information that sensor is taking in and does it need to take in all that information? And can we maybe change it?" she pointed out.
Companies accustomed to meeting data compliance laws, including the EU General Data Protection Regulation, are in a better position to meet the AI Act's requirements, according to OpenAI Associate General Counsel, Privacy and Data Protection Emma Redmond.
She said OpenAI wants a better understanding of some components of the regulation, including how different elements of the general-purpose code will work in practice. Gaining that understanding, she said, hinges on a working relationship with regulators, both to grasp their interpretations and to explain what the company is trying to do.
"We haven't really seen anything like this, with so many authorities involved in one area," Redmond added.
Caitlin Andrews is a staff writer for the IAPP.