At the IAPP AI Governance Global North America 2025, stakeholders reflected on the progress to date in building AI governance frameworks and opined on how the global governance conversation may evolve in the near and long term. Shifts in attention toward agentic AI, model training and inferencing, and energy consumption and allocation are among the topics pushing the larger AI governance conversation forward, despite general uncertainty around evolving global regulatory frameworks.
The greatest immediate threat to the still-emerging global AI governance conversation, according to Bird & Bird Partner Miriam Ballhausen, may be the establishment of diverging global legal frameworks regulating the technology, whether jurisdictions pursue a comprehensive AI law or rely on existing sectoral regulations.
"How do I as an organization actually manage (complying with a variety of global regulations) in as consolidated a way as possible," said Ballhausen, who presented during an AIGG breakout session focusing on the current state of global AI policy. "If you're looking at the U.S., the EU, potentially China or Asia more broadly, there are still other parts of the world where things are happening as well; understanding the different legal requirements is a big challenge that organizations are facing right now."
The majority of AI governance regulation still emanates from the EU, which may be viewed in some policy circles as overburdensome for AI developers. Ballhausen said the EU's approach, compared with the "pro-growth, pro-innovation" approach the U.S. is taking, may result in more long-term certainty for EU-based AI development, whereas the U.S. could be forced to react to harms caused by AI down the road.
"We're hearing from clients in the EU that at least they have guidance, and they know what they need to work toward," Ballhausen said. "If there's a lack of regulation, maybe innovation happens more quickly, but at the same time you don't know what will happen down the road, so maybe early regulation is also helpful."
Cooperation through international bodies
As the jurisdictions leading the global AI governance race mull the implementation of divergent or precedent-setting legal frameworks, one potential off-ramp toward interoperability may lie in the fact that smaller countries around the world are looking to cooperate with one another with respect to integrating their economies with AI.
Infocomm Media Development Authority of Singapore Director for Ecosystem Development and Engagement Jamin Tan said his country is a key leader in the Digital Forum of Small States, where participants "share notes on our regulatory challenges." He indicated the key to global cooperation on building resilient international AI frameworks is participating in "multilateral setups at a government level," such as working through the Organisation for Economic Co-operation and Development and the International Organization for Standardization.
Tan said Singapore currently has no designs on enacting a "horizontal or omnibus" AI regulation and is instead pursuing a "practical assessment of the harms and risks that arise from different aspects of AI systems," which is synthesized to understand how different models will impact existing sectoral law.
"If it's challenging for large countries to figure out how to get a handle on the global footprint of AI development and deployment, it's even more challenging for small countries to do that and to assert enforcement powers in-practice," he said. "We need to find ways to work with others on problems, rather than assume we can assert a law that is going to encompass every stage of the AI life cycle."
Signs of AI policy consistency
While the U.S.'s current pro-innovation track for AI makes it somewhat of an outlier among other jurisdictions, policy work under President Donald Trump and his predecessor, Joe Biden, has provided significant steps toward shaping the global conversation.
Carnegie Mellon Institute for Strategy and Technology Director of Studies Henry Krejsa said that while the rhetoric coming from the Trump White House surrounding AI innovation and governance may depart from the Biden administration, there may ultimately be a lot more overlap between the two administrations in the execution of AI policy.
"There's definitely a lot of rhetorical change, but more policy continuity than you might expect if you were just looking at the rhetorical change," said Krejsa, a former Department of Defense and White House official under Trump and Biden, respectively. "Every administration has limited bandwidth on how it applies its worldview and you don't necessarily want to throw the baby out with the bath water and start over every time the administration changes."
Administrative lag for global enforcement efforts likely
Regardless of jurisdiction, Bird & Bird's Ballhausen said the relentless pace of technological innovation will delay major enforcement efforts by regulators because the technology is so novel, and the depth and breadth of potential harms may not be known until the technology is adopted on a widespread scale throughout the global economy.
Such a scenario is already coming to pass in the EU and the U.S., with discussions taking place for a potential enforcement pause on parts of the EU AI Act and the Colorado AI Act's effective date delayed by months to June 2026.
Ballhausen compared a potential delay in enforcement actions to the administrative lag EU data protection authorities encountered when they began enforcing the EU General Data Protection Regulation.
"The sheer mass of AI setups that are available and that differ across the board will at least delay any kind of enforcement that we’re seeing; so it's not so much do the rules work or not work, but will there actually be any enforcement?" Ballhausen said. "Legally, the landscape is likely to become more fragmented. There are going to be more laws in more countries regulating AI to some degree or another, but if there is some collaboration on the part of enforcers, you may see they become interpreted more closely to one another and the fragmentation can disappear."
A change in energy allocation?
The looming potential of a fragmented global AI regulatory regime comes as technology companies are in the process of rethinking how to most efficiently allocate energy resources in order to make the most accurate AI models.
Carnegie Mellon's Krejsa said the energy consumption issue surrounding AI has changed so rapidly in the last two years that preexisting assumptions from AI developers in terms of the vast energy demands for training models, and how that energy ultimately gets allocated, need to be recalibrated.
At the start of the recent AI boom, according to Krejsa, Big Tech companies primarily concerned themselves with securing a sufficient supply of energy to power their data centers for training larger generative models. Over the last year, he said, AI developers have learned that model inferencing is becoming the more energy- and cost-intensive process compared to training.
"We discovered that inference, the process of applying pattern recognition to new and novel prompts, is indeed very cost intensive, but if you do more of it — if you re-balance the ratio of energy and time spent on applying those lessons learned to that (specific) model query — you get better results," Krejsa said. "That's one of the biggest assumptions that people have needed to update over the last year or so, is that if you empower these tools to think longer, and give them time to check their own work, and make sure they didn't get off track, you get much more accurate results. And that is bringing with it a lot of knock-on effects about where energy and infrastructure should go."
Alex LaCasse is a staff writer at the IAPP.