The interplay between AI standards and regulations

The AI Standards Hub Global Summit covered the ups and downs of relying on industry standards to streamline AI regulatory compliance.

Contributors:
Lexie White
Staff Writer
IAPP
Global jurisdictions are increasingly open to considering policies to ensure artificial intelligence systems are developed and used responsibly while balancing safety and innovation. But the race to the top of the global AI market is outpacing regulation, leaving companies exposed to risk without streamlined implementation plans.
Stakeholders at the AI Standards Hub Global Summit 2026 noted that while regulatory frameworks such as the EU AI Act continue to develop, organizations can focus their efforts on technical standards, assurance systems and other tools that set compliance practices on the right course.
Sara Rendtorff-Smith, head of the Organization for Economic Co-operation and Development's Division on AI and Emerging Digital Technologies, noted organizational and industry standards are "essential to governing AI well" and act as the "quiet infrastructure of innovation, as they enable us to scale AI safely and responsibly across our economies and societies."
The role standards play and how they are shaped
Rendtorff-Smith highlighted the importance of the OECD's industry standards, noting the "ability to govern effectively and around the world" must keep pace with how rapidly AI technologies are moving. She added international cooperation continues to be the linchpin for balancing sector-specific standards and regulations.
However, standards are perceived differently depending on the jurisdiction. Some EU organizations argue they cannot effectively comply with the AI Act without the industry standards they were promised before implementation deadlines. On the other hand, U.S. organizations are relying on the National Institute of Standards and Technology's suite of AI standards, including the AI Risk Management Framework and the AI Agent Standards Initiative, to pave the way to responsible AI amid a patchwork of state laws and no cross-sectoral federal law.
Tailoring standards to common practices and policies is also an emerging priority. The OECD is keeping stakeholders from the public and private sectors apprised through its AI Policy Observatory, which tracks more than 2,000 AI policies across over 80 jurisdictions. The IAPP does similar tracking through its Global AI Law and Policy Tracker.
The variance across global proposals, in addition to the ever-evolving nature of AI, is putting new pressures on standards development.
"AI is testing the system like nothing else ever has. We shouldn't be blind to that. We shouldn't be afraid to admit it. AI is testing regulators. It's testing society. It's testing academia. It's testing industry," British Standards Institution Standards Policy Director David Bell said. "AI is changing the way we work in so many fundamental ways. … Standards build the world, but the way we work going forward has got to change."
Another pillar of standards building is collaborative enforcement, which serves as a reference for best practices. Rendtorff-Smith warned insufficient global enforcement and "a very fragmented landscape" put at risk the foundation that standards require for strength and adoptability.
"(The fragmented landscape) will be marked not just by significant compliance cost to businesses, but also by barriers to cross-border deployment as well as stifled innovation, ultimately," she said. "And so we need principles, we need a common definition, we need to align frameworks, and we need evidence-based standards."
London School of Economics and Political Science Data Science Institute Distinguished Policy Fellow Florian Ostmann also highlighted the relationship between technical standards and enforcement efforts in supporting responsible AI safeguards.
"Standards play an important role in facilitating the implementation and compliance with regulation. It's also clear that standards have an important role to play as a complementary tool to regulation (such as) performing functions that regulation can't perform," Ostmann said.
Next steps
Stakeholders argue enforcement efforts should focus on strengthening coordination across regulatory and technical tools while addressing potential gaps in AI implementation.
OECD AI Senior Economist Luis Aranda noted organizations should look to measure their compliance and data protection safeguards. He said that while standards and regulations define expectations for trustworthy AI, consistent methods for assessing system performance remain limited.
"I think that's what we need to be thinking of today," Aranda said. "Standards are great. They tell you what good looks like. But they don't tell you how to measure good."
He also emphasized the need for more inclusive and globally representative governance approaches, noting those will require "shared foundations, shared concepts, and shared definitions." Standing in the way are concerns that specific AI regulations could hinder innovation.
"We all know there's a global AI race, and no country, of course, wants to be the first runner-up while everyone else is sprinting ahead," Aranda added. "We see all those factors contributing to this timing concern when it comes to regulation, and this is probably why we're also seeing a second wave of national AI initiatives."
