Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Artificial intelligence systems have rapidly transitioned from the lab to core business operations — bringing their associated risks along with them.

Not long ago, AI governance conversations centered on checklists for ethics, bias and model transparency. In 2025, governance teams find themselves confronted with real-time incidents that resemble those of cybersecurity or crisis management.

From AI models unpredictably drifting off course to employees inadvertently leaking data into chatbots, and even deepfakes duping staff — these risks are no longer hypothetical. They're happening, often in highly regulated environments where they were least expected.

The new wave of AI incidents

Consider a few examples that have prompted executives to rethink AI oversight. Early this year, an employee at a financial firm was tricked by a compelling deepfake video call impersonating senior management; the victim of the scam ended up wiring USD25 million to criminals before anyone realized the "boss" on the call wasn't real.

Around the same time, a leading tech company discovered its engineers unknowingly uploaded sensitive source code to an online AI chatbot, which prompted an immediate ban on employees using generative AI tools until proper safeguards were in place.

In a widely publicized prank, customers manipulated a car dealership's ChatGPT-powered assistant into "agreeing" to sell a more than USD60,000 SUV for USD1, exposing how easily a clever prompt can bypass an AI's intended rules. While no actual Chevy was sold for USD1, the incident forced the dealership — and its software vendor — to shut down the bot and acknowledge the need for stricter controls on AI interactions.

Meanwhile, organizations are realizing their AI models can create subtler fiascos on their own. When a large bank's fraud detection model gradually drifted out of sync with changing customer behavior, it began flagging thousands of legitimate transactions as fraud. The model hadn't been attacked or explicitly reprogrammed — it simply became miscalibrated over time — a phenomenon known as model drift. This silent erosion of accuracy disrupted customers and required an urgent, costly model retraining.

And then there's the bizarre case of generative AI systems hallucinating personal data. In one instance, an AI chatbot falsely accused a named law professor of misconduct, citing a nonexistent news article. The professor was shocked to find his reputation implicated by an AI's imagination. Such hallucinated outputs highlight how these models can confidently produce fictitious yet damaging information, including what appears to be private or sensitive data.

Each of these scenarios — data leaks, malicious prompt exploits, deepfake-driven fraud, model drift and AI hallucinations — reveal a common theme. When AI systems behave in unplanned or unsafe ways, the consequences can be similar to a security breach or operational crisis. Yet, historically, AI governance teams have not been equipped to respond in the same way a cybersecurity incident response team would. It's time for that to change.

From bias checklists to incident response playbooks

These days, AI governance professionals can't just write up a set of rules and walk away. They've got to stay plugged in — almost like an incident response team would. It's not just about launching a fair and compliant model and calling it a day. Things change. Something will break or behave unpredictably; when it does, it needs to be caught early.

That's why it's essential to build safeguards that go beyond the basics. Risks like model drift, weird or adversarial inputs, sensitive data slipping through, or even AI-generated junk content don't appear on a schedule, so the systems monitoring them can't be static. Governance must be as flexible and responsive as the AI it manages.

How can organizations do this? One key is to borrow proven tactics from cybersecurity and information technology risk management and apply them to AI.

Red team and stress test your AI. Before deploying AI models — especially generative ones — in the wild, conduct adversarial testing to identify and address their weaknesses. Security teams have long used penetration testing on software. Now we need analogous "prompt penetration tests" for chatbots and rigorous challenge scenarios for decision models.

For example, some companies are instituting AI red teams that try to "jailbreak" their models or induce harmful outputs in a controlled setting. This type of testing, encouraged by recent U.S. executive guidance on AI, can reveal how a model might be tricked into revealing sensitive information or violating policies, allowing for fixes to be implemented first.

Implement strong access and data controls. AI models should be treated as high-value systems and have proper access restrictions and robust data safeguards in place. Restrict user access and input permissions within enterprise AI tools; prevent sensitive data from being used in AI training or prompts unless necessary.

Following the incident in which Samsung's engineers unintentionally leaked secrets to ChatGPT, the company not only temporarily banned external AI use but also began developing internal AI solutions to prevent employees from copying data into public systems.

Financial institutions have taken similar precautions. JPMorgan Chase, for instance, restricted employee use of ChatGPT out of concern that staff might feed it confidential information or rely on its unvetted answers. Many banks are now routing AI experiments through approved sandboxes or vendor platforms that enable oversight of what data is being input and output.

Continuous monitoring for model performance and misuse. Just as cybersecurity teams deploy intrusion detection systems, AI teams need to deploy AI drift and abuse detection. This involves monitoring a model's outputs and decision patterns over time, enabling the identification of behavioral shifts or anomalies.

For a predictive model, that might mean tracking if error rates are creeping up or if outputs start differing from an independent "shadow" model running on fresh data. For a generative AI, this means logging user queries and responses to flag potential issues, such as attempted prompt injections or inappropriate answers.

Leading practice is to establish automated alerts — for example, if a large language model suddenly produces a burst of refusals or toxic content, or conversely, if it's agreeing to obviously problematic requests. Early detection offers the chance for intervention — retraining the model, adjusting filters or even temporarily pulling it from production — before a minor glitch becomes a headline-making incident.

Prepare an AI incident response plan. When preventative controls fail, a playbook for damage control is needed. Who is alerted if an AI system is compromised or starts disclosing sensitive data? How will the issue be addressed? Potential responses may include disabling specific model features, rolling back to a previous version or severing external integrations, depending on the nature and severity of the issue. And importantly, how can the incident be communicated to management, users, regulators or those affected?

Many organizations don't have answers until after an AI mishap occurs. A better approach is to proactively map out response steps, much like companies do when responding to data breaches. This might include having a cross-functional AI incident response team — involving IT, privacy, legal, public relations and business unit leaders — that can be quickly mobilized.

Rehearse various scenarios. It could be a training data leak, a public relations fiasco from a biased AI output, or a deep-fake campaign. Drills and tabletop exercises will help prepare teams to handle the real thing. As the SANS Institute's 2025 Draft Critical AI Security Guidelines note, developing an AI incident response plan is now an essential component of AI governance.

Adaptive governance aligned with emerging rules

Taking this more dynamic, security-minded approach to AI governance not only reduces surprises but also positions organizations to meet new regulatory expectations. Around the world, policymakers are recognizing these AI risks and demanding greater accountability.

The U.S. government's recent AI executive order, and related guidance, calls for rigorous safety testing, transparency and oversight of advanced AI models. The U.S. National Institute of Standards and Technology's AI Risk Management Framework, a voluntary standard gaining global traction, emphasizes continuous monitoring and risk mitigation throughout an AI system's lifecycle, from design to deployment to retirement. It urges organizations to "map" risks, "measure" and control them, and "manage" AI with ongoing improvement cycles — essentially the practices described above.

In Europe, the AI Act is set to raise the bar on risk management requirements. For high-risk AI systems, such as those used in health care, finance or public services, compliance will mean much more than simply checking a list at launch. These systems will require thorough risk assessments, regular incident logging, and ongoing monitoring, even after they're deployed in the field.

The idea is to identify problems that may not be apparent at first. Providers will need to incorporate safety measures to protect against harmful outcomes and, in the case of specific generative AI tools, ensure transparency from the outset. For instance, if an image or audio clip could be mistaken for something real, it will need a clear label indicating that AI created it.

Moreover, if something goes seriously wrong or nearly does, organizations may be required to notify regulators. This isn't just a trend; it's a signal that AI governance is shifting from a one-time task to an ongoing, active responsibility. Keeping systems safe and trustworthy won't be optional — it will be built into how they're managed day-to-day.

The good news is that sectors with mature risk practices are adapting those practices to AI. Financial services firms, for instance, have long-managed model risk under strict oversight — such as the U.S. Federal Reserve's guidelines for banking models. Now they are extending those frameworks to cover AI-specific challenges. Banks are expanding their model validation programs to include checks for bias, robustness to manipulation and data protection safeguards in AI models. Some are establishing AI governance committees that bring together compliance, IT security and business leaders to review new AI use cases and ensure proper controls are in place before launch.

This kind of cross-functional governance is key. AI risk doesn't fit neatly into one box, so responses shouldn't either. Privacy officers, security architects, data scientists and business owners must collaborate to establish policies and respond to incidents.

Practical steps forward

As AI systems become more autonomous and capable, the associated risks also become more autonomous, meaning they can emerge quickly and unpredictably. AI governance teams should embrace a mindset of adaptive resilience.

Start with the basics. Inventory AI deployments and assess where a failure would have the most significant impact — for example, an AI model making customer-impacting decisions or handling sensitive data. Prioritize those for the strongest controls and monitoring. Train staff to be aware of AI-generated content threats — for example, teach employees how to spot an email from a deepfake CEO before they wire money. Incorporate AI scenarios into existing incident response and business continuity plans. Any failure by a third-party vendor can quickly become an organization's failure, so ensure they're contractually committed to security and privacy standards.

We are entering an era where managing AI is less about adhering to regulatory principles but more about hands-on risk management in real time. By blending responsible AI practices with robust security and incident response tactics, organizations can stay ahead of emerging threats without stifling innovation.

The message for AI governance professionals is clear: Don't just audit for bias; look for breaches. Don't just design for fairness; plan for failure modes. In short, governance must be practical and technical. If AI risks are treated with the same seriousness and agility as other enterprise risks, AI's benefits can be confidently harnessed while staying ahead of its autonomous pitfalls.

The age of autonomous risk demands autonomous response. Those prepared to act swiftly and smartly will be the ones to keep their organizations secure, compliant and worthy of trust as AI transforms the business landscape.

Ankit Gupta, AIGP, CIPP/US, CIPM, FIP, is an author and speaker on cybersecurity and AI governance, with over 15 years of experience in cloud and identity security.