Responsible AI governance shouldn't start — much less end — at legal compliance


Contributors:
Brenda Leong
AIGP, CIPP/US
Director of the AI Division
ZwillGen
Jey Kumarasamy
Associate
BNH.AI
Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Clients have recently started asking a version of the same question: "If our system isn't 'high risk' under the EU Artificial Intelligence Act or the Colorado AI Act, why do I need to do anything about it?"
The assumption is that risk is whatever statutes say it is — and only that. But that framing is narrow, unsafe for consumers and bad for business.
Long, long ago, at least in modern machine learning years, companies began integrating increasingly complex predictive AI models into their business operations and services. They struggled to understand these systems: how to assess the risks involved and how to manage them in ways that captured the business value while preventing harm to customers, public embarrassment or legal liability.
There was no clear path, no AI-specific regulation or law, and no established best practices to follow.
In that uncertainty, many lawyers and data scientists, as well as academics and other professionals, waded in to define what "responsible AI governance" might look like. Some of them ended up working on a grant with the U.S. National Institute of Standards and Technology — including the founders of the law firm that is now the AI Division of ZwillGen. Thus, the NIST AI Risk Management Framework was born.