Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Standards discussions in artificial intelligence circles seem to be on the rise. From my perspective, this emerging popularity is a positive sign that AI governance and policy ecosystems are maturing. As AI legislation is enacted and general principles are adopted, there is a desire to answer the next set of questions: What does "good" look like? How do we ensure accuracy or mitigate bias? How transparent do we need to be? And what standard do we aim to meet?

The development of AI standards sounds like an obvious answer to these questions. However, from personal experience, calling for standards is much easier than creating them. What can AI governance professionals do to help advance the development of AI standards?

To do so, it's important to understand the complexity of AI standardization.

Understanding the challenge

Developing standards for AI may sound straightforward, but it is actually a complex and intricate web of tradition, process and sophisticated technical knowledge. This poses the first challenge: developing an accredited standard through existing standards bodies takes a significant amount of time, resources and technical expertise in both the subject matter and the standardization process. Strong diplomacy skills and patience are also assets.

Second, even though standards development processes are difficult, many relevant standards already exist. For example, in two efforts to aggregate AI standards, Canada's AI and Data standards database has identified 420 relevant standards, while the U.K.'s AI Standards Hub has more than 500. To be fair, these include data, privacy and cybersecurity standards that were developed for other purposes but are applicable to AI. However, many of these standards have been developed specifically for AI.

Third, navigating what these AI standards are for and how they work together is challenging. Those not familiar with standards development quickly realize, once they engage, that not all standards are equal. They exist for different reasons, they serve different purposes, they follow different development and acceptance processes, and, as a result, they have varying levels of adoption.

Building on this, some of these standards, such as the most recognizable AI standard, ISO/IEC 42001, set objectives for an organization, while other standards set targets for aspects of the AI system itself. Such a target could be definitional, or it could be technical, like evaluating how explainable a particular system is. Additionally, there are standards for the people doing the work, such as the IAPP's Artificial Intelligence Governance Professional certification.

Fourth, and most important in my opinion, we lack a classification for AI systems. While this may not seem like the biggest issue, having a simple way to talk about subsets of AI use cases would be extremely useful. It would help with the prioritization and development of AI standards and policies, and it would also help AI governance professionals more easily navigate what is acceptable use.

In the EU AI Act, there is an attempt to do this in Annex III, given that the bulk of the requirements in the act are just for high-risk systems. A common classification that exists as a stand-alone resource independent of the AI Act would allow the broader community to leverage an understanding of AI use cases, determine the risks of those use cases, and work in better coordination to apply existing standards or determine where new standards are needed.

Yes, capturing all the different uses of AI could be a fool's errand, especially because so many technologies are included in a broad definition of AI being applied across all industries and regions. However, efforts are being made in various sectors, as we have seen with the World Health Organization, to reach a common understanding of AI use in their verticals.

In the absence of a commonly agreed-upon classification of AI uses, a lot of effort is spent on disparate aspects of standards without a coherent vision in place. This is like building a house from varying materials without a blueprint for how those materials will be applied.

Why this matters

Solving the above challenges is not just an academic pursuit. It's particularly important in a fractured and divergent global policy environment. While there was a time when people questioned whether the EU AI Act would become the gold standard, as the EU General Data Protection Regulation did, the answer to this question is increasingly an obvious no. As a result, there is a stronger dependency on harmonized and interoperable standards to increase the adoption of AI systems across the world.

While AI policy discussions may be divergent, and active efforts have been made to scale back AI regulation, standards remain a priority: the Trump administration renamed the U.S. AI Safety Institute the Center for AI Standards and Innovation. Canada, Japan, Singapore, the U.K. and many other countries are very active with both domestic and international standards development efforts.

What can AI governance professionals do?

If you are hoping for answers to some of the questions from the start, this is your opportunity to get involved in the standards development process, whether through an industry association in your sector helping to inform national and international standards, or directly with your national standards body. These processes are always in need of talented individuals.

Alternatively, AI governance professionals can start within their own organizations by developing an inventory of their AI use cases and mapping which standards apply to each. In addition to formal standards, many research initiatives are underway that are helping to inform AI governance best practices across the full spectrum of use.
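As a hypothetical illustration of that inventory-and-mapping exercise, the sketch below models each AI use case with a risk tier and a list of potentially applicable standards. The standard names are real, but which standards apply to which use case, and the example use cases themselves, are assumptions made for this sketch; a real inventory would be built with legal and technical review.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """One entry in an organization's AI use-case inventory (illustrative only)."""
    name: str
    risk_level: str                          # e.g., "high" in an Annex III-style tiering
    standards: list = field(default_factory=list)  # standards assumed applicable

# Hypothetical inventory; the standard-to-use-case mapping is an assumption.
inventory = [
    UseCase("resume screening", "high", ["ISO/IEC 42001", "ISO/IEC 23894"]),
    UseCase("internal FAQ chatbot", "limited", ["ISO/IEC 42001"]),
]

def standards_for(inventory, risk_level):
    """Collect the distinct standards mapped to use cases at a given risk tier."""
    found = set()
    for uc in inventory:
        if uc.risk_level == risk_level:
            found.update(uc.standards)
    return sorted(found)
```

Even a simple structure like this makes it possible to ask questions such as "which standards do our high-risk systems implicate?" and to spot use cases with no mapped standard at all.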

Ashley Casovan is the managing director for the IAPP AI Governance Center.

This monthly column originally appeared in the AI Governance Dashboard, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.

Editor’s note: For those interested in learning more about the AI standards landscape, join us at the AI Governance Global North America 2025 conference in Boston. On 18 Sept., we will have a panel on AI standards where we will dig into this discussion even further.