OPINION

The AI governance conversation is happening in the wrong room

The challenge around AI governance isn't selecting the right framework — it's operationalization.

Contributors:

Kristin Butler

Deputy Chief Information Security Officer

Personify Health

Editor's note

The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.

Every governance conversation I walk into starts the same way: Which framework should we adopt?

U.S. National Institute of Standards and Technology Artificial Intelligence Risk Management Framework. ISO/IEC 42001. The EU AI Act. NIST Cybersecurity Framework 2.0 with AI overlays. The list grows monthly.

And while those conversations are happening in conference rooms and compliance committees, the actual problem is compounding in production environments.

The real challenge isn't framework selection — it's operationalization.

Regulation is doing what it always does

Regulatory bodies are moving slower than the technology. That's not criticism; it's a structural reality. Frameworks describe what to govern. They are considerably less helpful on how to govern in an environment changing faster than the last risk assessment.

I've spent 25 years watching technology outpace regulatory guidance. Across that time, the pattern is consistent: Organizations that wait for regulatory clarity before building governance capability spend the next three years catching up. Organizations that operationalize ahead of the mandate spend those same three years building competitive advantage.

AI is no different. Yet the gap between deployment velocity and governance maturity has never been wider.

The problems the industry isn't solving

Three operational realities go largely unaddressed in most governance conversations.

First: Organizations cannot govern what they cannot see. Monitoring, tracking and inventorying AI across an enterprise is genuinely difficult. Shadow AI adoption — employees and teams deploying tools outside formal procurement — creates significantly more AI exposure than most organizations' governance programs account for.
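
To make the visibility problem concrete, here is a minimal sketch of one way to surface shadow AI: reconciling the sanctioned-tool inventory against observed usage. The tool names and events are hypothetical; a real program would feed this from procurement records and SSO or network egress logs rather than hard-coded lists.

```python
# Hypothetical reconciliation of sanctioned AI tools against observed usage.
SANCTIONED = {"copilot-enterprise", "internal-summarizer"}

# Illustrative events, e.g. parsed from SSO or egress logs.
observed_usage = [
    {"tool": "copilot-enterprise", "team": "engineering"},
    {"tool": "freemium-transcriber", "team": "sales"},
    {"tool": "internal-summarizer", "team": "legal"},
    {"tool": "freemium-transcriber", "team": "support"},
]

shadow: dict[str, set[str]] = {}
for event in observed_usage:
    if event["tool"] not in SANCTIONED:
        shadow.setdefault(event["tool"], set()).add(event["team"])

for tool, teams in shadow.items():
    print(f"Shadow AI candidate: {tool} (teams: {', '.join(sorted(teams))})")
```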

Second: "Human in the loop" has become a design claim more than a control. The presence of human oversight in an AI workflow is only meaningful when that oversight is structured, documented and actually capable of catching what it's supposed to catch.

Third: Friction is not governance. There is a meaningful difference between a governance program that slows AI deployment and one that manages AI risk. The organizations that will lead in this space aren't building the most comprehensive AI policies. They're building the operational infrastructure to capture risk in motion — as AI gets deployed, models get updated, and use cases expand beyond their original scope.

What operationalized AI governance actually looks like

In practice, the shift from compliance-oriented to risk-oriented AI governance requires three things.

It requires an intake and inventory model that captures AI deployment at the point of adoption — not in an annual review. Every new AI tool, every third-party model integration and every internally developed capability needs a structured intake process with risk evaluation criteria appropriate to the use case and the data involved.
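
A minimal sketch of such an intake record follows. The field names and risk criteria are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIntakeRecord:
    name: str
    owner: str
    use_case: str
    data_categories: list[str]   # e.g. ["PII", "PHI", "public"]
    third_party_model: bool
    intake_date: date = field(default_factory=date.today)

    def initial_risk_tier(self) -> str:
        # Illustrative criteria: sensitive data or external models raise the tier.
        if "PHI" in self.data_categories:
            return "high"
        if "PII" in self.data_categories or self.third_party_model:
            return "medium"
        return "low"

record = AIIntakeRecord(
    name="contract-clause-extractor",
    owner="legal-ops",
    use_case="summarize vendor contracts",
    data_categories=["PII"],
    third_party_model=True,
)
print(record.name, "->", record.initial_risk_tier())
```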

It requires risk thresholds defined in business terms, not technical severity ratings: business exposure. What does a failure in this AI system cost? What's the regulatory surface? What's the customer impact? When risk is expressed in the language executives and boards use to make decisions, governance stops being a security function and becomes a business function.
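
As an illustration only, with invented dollar figures, weights and thresholds, a business-exposure calculation might look like this:

```python
def business_exposure(failure_cost_usd: float,
                      annual_failure_likelihood: float,
                      regulated_data: bool,
                      customers_affected: int) -> tuple[float, str]:
    """Translate an AI failure scenario into expected annual business exposure."""
    expected_loss = failure_cost_usd * annual_failure_likelihood
    if regulated_data:
        expected_loss *= 1.5   # illustrative uplift for regulatory surface
    if customers_affected > 10_000:
        expected_loss *= 1.25  # illustrative uplift for customer impact
    tier = ("board-level" if expected_loss > 1_000_000
            else "executive" if expected_loss > 100_000
            else "operational")
    return expected_loss, tier

loss, tier = business_exposure(
    failure_cost_usd=2_000_000,
    annual_failure_likelihood=0.10,
    regulated_data=True,
    customers_affected=50_000,
)
print(f"Expected annual exposure: ${loss:,.0f} ({tier})")
```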

It requires ongoing monitoring designed for model behavior, not just system availability. AI systems drift. Models trained on historical data encounter conditions their training didn't anticipate. Use cases expand. What was low-risk at deployment may not be low-risk six months later.
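
One illustrative way to watch for that drift is to compare live output scores against a deployment-time baseline, for example with a population stability index. The samples and the 0.25 cutoff below are rule-of-thumb assumptions, not a standard.

```python
import math

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    """Population stability index between two score samples in [0, 1]."""
    def freqs(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        # Small floor keeps log() defined for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]
    b, l = freqs(baseline), freqs(live)
    return sum((lv - bv) * math.log(lv / bv) for bv, lv in zip(b, l))

# Hypothetical model scores: deployment baseline vs. six months later.
baseline = [0.10, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60]
live     = [0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95]

score = psi(baseline, live)
print(f"PSI = {score:.2f}")
if score > 0.25:  # common rule-of-thumb cutoff for material drift
    print("Behavior has shifted: revisit the risk rating assigned at deployment.")
```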

The shift worth watching

Something is changing in how serious organizations are approaching this. The compliance conversation — whether the right boxes are being checked — is giving way to a risk conversation: what the organization is actually exposed to and how to manage it in motion.

In 25 years of watching technology outpace regulation, I've never seen the landscape shift this fast or this broadly.

The question worth asking isn't which framework to adopt; it's how to make risk management as fast as deployment.

One thing does feel different right now: the industry is finally having that risk conversation in earnest. The shift is long overdue, and genuinely refreshing.

The views expressed in this article belong solely to the author. 
