The case for treating AI governance as a standalone imperative


Contributors:
May Sethaphanich
AIGP, CIPP/A
Global Senior Counsel, AI Governance
McDonald's
Anthony SchianodiCola
AI governance advisor
Credo AI
Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Artificial intelligence is moving faster than governance models can adapt. Organizations that aren't proactively thinking about dedicated AI governance now are at risk of being left behind.
To avoid this, organizations must reconsider how they approach AI risk management and oversight. Today, many organizations are still in the early and mid-stages of their AI adoption journeys and manage AI risks by augmenting existing risk management domains, such as data privacy, information security and compliance. However, with greater AI adoption and emerging advancements such as agentic AI, the scale and complexity of risks will increase, challenging existing governance models and the assumption that current structures are adequate.
A central, dedicated AI governance function is the most sustainable and effective solution for managing AI at scale. Once organizations reach a certain threshold of AI adoption, it is critical to shift toward a model that includes an AI governance function with dedicated resources. This shift will be a key milestone in effectively managing the increasing risks, both known and unknown, and the operational challenges that come with being an AI-forward organization.
Unique governance considerations
AI introduces unique governance considerations that cannot be effectively managed at scale by siloed risk management domains.