Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Artificial intelligence is moving faster than governance models can adapt. Organizations that aren't proactively thinking about dedicated AI governance now are at risk of being left behind. 

To avoid this, it is necessary to reconsider how organizations approach AI risk management and oversight. Today, many organizations are still in the early and middle stages of their AI adoption journey and are managing AI risks by augmenting existing risk management domains, such as data privacy, information security and compliance. However, with greater AI adoption and emerging advancements such as agentic AI, the scale and complexity of risks will increase, challenging existing governance models and the assumption that current structures are adequate.

A central and dedicated AI governance function is the most sustainable and effective solution for managing AI at scale. Once organizations reach a certain threshold of AI adoption, it is critical to shift toward a model that includes an AI governance function staffed with dedicated resources. This will be a key milestone in effectively managing the increasing risks, both known and unknown, and the operational challenges that come with being an AI-forward organization.

Unique governance considerations

AI introduces unique governance considerations that cannot be effectively managed at scale by siloed risk management domains.

To start, the unit of oversight for AI governance is fundamentally different from other risk domains. For example, privacy and information security typically oversee data flows and software applications, respectively. In AI governance, however, oversight is typically performed at the AI use case level. AI use cases are comprehensive by definition and include business and operational context, as well as underlying technical components such as models, algorithms, features and weights, methods of model training or tuning, datasets and architecture.
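
As an illustration only, the sketch below shows what a use-case-level inventory record might capture. The class and field names are hypothetical assumptions, not drawn from any specific framework or tool.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Hypothetical unit of oversight: one inventory record per AI use case."""
    name: str                                          # e.g., "resume screening assistant"
    business_context: str                              # owning function and intended purpose
    models: list[str] = field(default_factory=list)    # underlying models and algorithms
    training_method: str = ""                          # e.g., "fine-tuned" or "prompted foundation model"
    datasets: list[str] = field(default_factory=list)  # training and evaluation data sources
    architecture: str = ""                             # deployment architecture summary
    risk_tier: str = "unassessed"                      # assigned during governance review
```

Reviewing at this level keeps the business context and the technical components in a single record, which is what distinguishes use-case oversight from application- or data-flow-level oversight.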

AI introduces an entirely new class of AI-specific risks that do not fit squarely into existing risk domains. Novel risks such as harmful bias, lack of explainability and hallucinations fall outside the typical purview of conventional privacy and information security teams. These risks are intrinsic to AI systems, and their complexity, scale and potential for widespread impact will reach unprecedented levels with the emergence of multi-agent systems and other frontier systems.

AI also heightens risks for existing domains, such as compliance, privacy, information security and third-party risk management. For example, AI-focused regulations such as the EU AI Act, and U.S. state laws including the Colorado AI Act and Texas Responsible AI Governance Act, are increasing the compliance risk associated with the development and use of AI. AI rules often demand deeper specificity, such as mapping regulatory requirements to technical controls, documentation and evidence unique to AI systems, a task traditional risk management processes are not designed to handle.
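
To make that concrete, one hedged sketch of such a requirement-to-control mapping follows. The entries paraphrase real obligations only at a very high level; the identifiers, control names and evidence items are illustrative placeholders, not a compliance matrix.

```python
# Illustrative mapping of AI-specific regulatory requirements to technical
# controls and the evidence a governance reviewer would collect.
# All keys and values are hypothetical placeholders.
requirement_to_controls = {
    "EU AI Act: transparency obligations": {
        "controls": ["user-facing AI disclosure", "model documentation"],
        "evidence": ["disclosure screenshots", "model card"],
    },
    "Colorado AI Act: impact assessment for high-risk systems": {
        "controls": ["pre-deployment bias testing", "periodic impact assessment"],
        "evidence": ["bias test report", "completed assessment with sign-off"],
    },
}
```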

Existing risk teams that are privacy, security or compliance focused will be hard-pressed to expand their oversight to address AI-specific risks that do not neatly fit into their domains. Effectively coordinating governance efforts across risk domains will be a difficult yet critical task that will also require significant attention. To effectively tackle these gaps, organizations should consider a different approach.

A new operating model

For organizations with significant strategic AI ambitions, a dedicated AI governance function is the most sustainable solution over the long haul. There is no one-size-fits-all path to get there, but rather a progression that aligns with an organization's increasing AI adoption and maturity. This function requires a blend of legal, technical and ethical skills to navigate both the complexities of AI and the rapidly evolving regulatory landscape, from the EU AI Act's regional approach to fragmented U.S. state laws.

We can think of this as a three-stage progression to ultimately mature into dedicated AI governance. The right stage for each organization depends on the scale, number and risk of its AI use cases across the enterprise. The ultimate goal is to establish a central AI governance team comprising "AI risk specialists" and a "quarterback" who can view AI risks from a high-level strategic perspective. This team would be responsible for designing and enforcing mitigation strategies, pulling together insights from various stakeholders, and performing a holistic review of all AI use cases to ensure safe, responsible and compliant AI is at the core of the organization.

The path forward

While many organizations may not be ready to adopt this model yet, we anticipate it will become a foundational requirement for AI-forward organizations. Across industries, organizations are realizing that AI will require governance not simply as a distributed responsibility, but through a dedicated function equipped to tackle the most impactful risk management challenge of our time.

When organizations come to this realization, one simple yet difficult-to-answer question will arise: "Where do I go from here?"

Organizations do not simply "flip a switch" to a standalone AI governance function, but progress through a journey, often driven by the increasing scale and complexity of their AI footprint. A three-stage maturity model describes this evolution, providing a roadmap for organizations to follow.

Stage 1: Ad hoc governance and augmentation

In this phase, the organization is likely still establishing the foundational components of AI use and oversight, such as the AI strategy, inventory, risk appetite, policies and standards. 

The operating model likely reflects this early stage of maturity, and existing functions such as security, privacy and legal are tasked with augmenting their existing responsibilities to include AI risk management. This mode of operation will work for a while but will ultimately come under pressure as AI-specific risks and AI use begin to scale.

Stage 2: Collaborative AI governance

As AI adoption scales across organizational departments, a more structured and well-orchestrated approach to AI oversight will be needed. Often, organizations create AI working groups that better enable collaboration to oversee AI use cases. The process for reviewing, accepting and monitoring AI use cases can also be improved through standardization, as sketched below.
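
As one possible illustration of that standardization, the review-and-acceptance flow can be expressed as a simple lifecycle with allowed transitions. The states below are assumptions for the sketch, not a prescribed process.

```python
# Hypothetical lifecycle for an AI use case under a standardized review process.
ALLOWED_TRANSITIONS = {
    "submitted": {"in_review"},
    "in_review": {"accepted", "rejected", "needs_changes"},
    "needs_changes": {"in_review"},
    "accepted": {"monitoring"},
    "monitoring": {"in_review"},  # periodic re-review as the use case evolves
}

def advance(state: str, new_state: str) -> str:
    """Move a use case to a new lifecycle state, enforcing the standard flow."""
    if new_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {new_state!r}")
    return new_state
```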

However, without a dedicated process owner, AI governance is still performed as a part-time role by stakeholders, and no one is clearly responsible for continuously improving and automating the AI governance process as AI's use and capabilities evolve.

Stage 3: Dedicated AI governance

This should be the target future state for AI-forward organizations. At this stage, organizations' commitment to AI as a core business driver is reflected in their equal commitment to AI governance.

There is a central AI governance team or role, depending on the organization's size, that is mandated to design and enforce responsible AI throughout the enterprise. Operationalizing governance at scale requires capabilities such as enhanced automation, audit-ready workflows and continuous monitoring, which a best-of-breed solution can provide to enhance both the effectiveness of the AI governance team and the scalability of the governance process.
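
For instance, continuous monitoring could include an automated check that flags use cases overdue for periodic re-review. This is a minimal sketch under assumed field names and an assumed review interval, not a description of any particular product.

```python
from datetime import date, timedelta

# Hypothetical automated check: flag AI use cases overdue for re-review.
REVIEW_INTERVAL = timedelta(days=180)  # assumed six-month review cycle

def overdue_use_cases(inventory: list[dict], today: date) -> list[str]:
    """Return the names of use cases whose last review exceeds the interval."""
    return [
        record["name"]
        for record in inventory
        if today - record["last_reviewed"] > REVIEW_INTERVAL
    ]

# Example: a use case last reviewed a year earlier would be flagged.
inventory = [{"name": "chat summarizer", "last_reviewed": date(2024, 1, 15)}]
print(overdue_use_cases(inventory, date(2025, 1, 15)))
```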

Final thoughts

To prepare for future challenges, organizations should proactively consider appropriate solutions for tomorrow. As AI becomes more ubiquitous in day-to-day business activities and emerging capabilities, such as multi-agentic workflows, are deployed, the need for effective AI oversight will only escalate.

Establishing a dedicated AI governance function will enable sustainable oversight of AI that can both scale with increasing use and adapt to the latest technological advancements.

May Sethaphanich, AIGP, CIPP/A, is senior counsel, global AI/AI governance and privacy at McDonald's Corporation. 

Anthony SchianodiCola is AI governance advisor at Credo AI.