Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
As organizations rush to adopt generative artificial intelligence, privacy professionals are stepping up to identify and manage associated risks. Assurance and risk frameworks abound.
The systemic risks of generative AI have the potential to impact developers, deployers and users at scale. Insurance for generative AI applications is likely to become increasingly important as a way to manage risks that cannot be mitigated or eliminated.
Privcore Managing Director Annelies Moens, CIPP/E, CIPT, FIP, interviewed Privacy Commissioner of Bermuda Alexander White, CIPP/A, CIPP/C, CIPP/E, CIPP/G, CIPP/US, CIPM, CIPT, FIP, and Privcore Head of Research and Principal Consultant Dr. John Selby, CIPP/E, CIPM, FIP, to find out what happens when controls and mitigations don't bring risks down to an internally acceptable level. The following was adapted from their panel at the IAPP ANZ Summit in November 2024.
Moens: From a risk perspective, what is different about generative AI, as opposed to the AI we have had for decades?
White: It's an open question just how much actually is different now. A lot of the tools that we think of as generative AI have been around for a while. What has changed is the popular understanding and uptake of the tools. A good analogy is video calling. We had video conferencing tools for years, but it wasn't until there was a cultural shift in adoption and use that they really became part of our lives.
So, with that in mind, perhaps the biggest difference with our current generative AI is that it is easy. The risk is not necessarily that the tool is doing something new, but that many more people are using the tool. The surface area of risk has changed dramatically.
Moens: What are the potential harms of generative AI? How likely are they to affect large cohorts of individuals?
Selby: The Massachusetts Institute of Technology's AI Risk Repository has identified over 1,000 AI risks, which can be categorized into groups based upon how frequently they occur, how many people they affect, and how severely they affect each person they harm.
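To make that categorization concrete, the sketch below scores risks along those three dimensions. It is a minimal Python illustration, not a method Selby endorses, and every name and figure in it is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    annual_frequency: float  # expected incidents per year (hypothetical)
    cohort_size: int         # individuals affected per incident (hypothetical)
    severity: float          # harm per affected person, 0-1 scale (hypothetical)

    def expected_harm(self) -> float:
        # Expected annual harm = frequency x cohort size x per-person severity
        return self.annual_frequency * self.cohort_size * self.severity

# Illustrative entries only; a real assessment would draw on catalogued
# risks such as those in the MIT AI Risk Repository, not invented numbers.
risks = [
    AIRisk("hallucinated advice", annual_frequency=12.0,
           cohort_size=50_000, severity=0.01),
    AIRisk("training-data leak", annual_frequency=0.5,
           cohort_size=5_000_000, severity=0.05),
]

# Rank risks by expected annual harm, highest first.
for risk in sorted(risks, key=lambda r: r.expected_harm(), reverse=True):
    print(f"{risk.name}: expected annual harm score = {risk.expected_harm():,.0f}")
```

Sorting by a score like this is one simple way to surface which risks warrant mitigation, transfer or insurance first.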
The most popular generative AI companies have hundreds of millions of customers, far more than the organizations that were compromised in the largest examples of cyber incidents — think SolarWinds, NotPetya, MOVEit Transfer or Salt Typhoon. Consequently, a single incident could simultaneously affect far larger cohorts of individuals and organizations than we have seen in cyberattacks. Globally, regulators are starting to issue guidance on what organizations should consider to reduce the risk of these widespread harms when adopting generative AI tools.
Moens: Not all risks can be mitigated or eliminated, as even the best models will produce wrong or misleading results from time to time. This is compounded by the opacity of the models, which makes the risks harder to understand and control.
Does insurance have a role to play? How developed is insurance covering generative AI?
Selby: The insurance market to transfer generative AI risks is still embryonic. A small number of vendors currently offer coverage through specialized insurance policies or as an add-on to cyber-insurance, typically requiring policyholders to demonstrate sophisticated governance capability and controls for AI risks before providing limited coverage with broad privacy and copyright exclusions.
This will compel organizations to assess the value for money of such policies and may require them to self-insure against some of the most significant risks, and/or rely upon the limited indemnities offered by some of the large generative AI developers.
Moens: Bermuda is referred to as the world's risk capital. It is the home of underwriting operations for major international insurance and reinsurance firms and is the largest supplier of catastrophe insurance to the U.S. Are dialogues occurring regarding the insurability of generative AI?
White: Yes, you do see some insurers and reinsurers specializing in new and innovative technologies, some even branding themselves around AI insurance. In the meantime, the absence of a dedicated product doesn't necessarily mean AI isn't covered.
Many insurance companies use standardized forms and language to make up their insurance policies. This makes sense for legal certainty, because you know that courts and judges have interpreted the provisions in certain ways. What this standardization also means is that the forms do not necessarily change quickly to incorporate new issues or risks.
The result is that AI may get covered in the near term under a broader provision, such as one covering general liabilities. Or, if an AI system were to damage another entity's computer system, the particulars could fit under business interruption coverage, for example.
What will really change the entire dynamic is when insurers begin to see AI risk as serious enough to explicitly exclude it, closing off this sort of spillover, known as "silent AI" coverage. Then, organizations could find themselves without insurance coverage unless the insurer adds back a provision to cover AI.
Moens: What can we learn from cyber insurance that could be applied potentially to generative AI insurance?
Selby: Over the last three decades, a challenge for insurers has been the emergence of cyber-based harms. Insurers initially found this market profitable, but a combination of ransomware, cryptocurrencies and enforcement difficulties against cybercriminals meant insurance providers experienced losses that drove up premiums and lowered coverage limits.
Organizations integrating generative AI tools into their operations should assess whether the risks they are taking fall within their risk tolerances and explore whether sufficient insurance coverage is available at an affordable price for their excess risks.
Some organizations may find insurers are unwilling to provide coverage for generative AI implementations without significant upfront investment by the policyholder to implement mature governance systems and controls.
Moens: Do you think there is a case privacy professionals can make for increased investment in controls around generative AI in light of the different directions insurance policies could go?
White: We all know that privacy professionals, like their compliance peers, struggle to explain to business leaders the value of what they are doing. It can be hard to prove the negative, that you prevented an issue from happening. With that in mind, the more we can quantify the nature of the risk and the benefits of good practices, the easier it becomes to show return on investment of a privacy program.
Whenever an insurance company sets an organization's rates, its underwriters are quantifying the financial risk the insurer is taking on. Sophisticated underwriters for cyber or AI liability coverage will look at an organization's cybersecurity and data governance practices.
Privacy professionals should try to get involved in that underwriting process, helping their organization explain its privacy program to insurers. As insurers evolve to better contemplate data risks, I anticipate situations where they will reduce a company's premiums if the company can show it has a diligent privacy program. It is definitely an argument worth making to your insurance broker and your senior executives.
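One established way to quantify the kind of financial risk underwriters work with, offered here as an illustrative assumption rather than anything White prescribes, is annualized loss expectancy: the expected cost of a single incident multiplied by how often such an incident is expected to occur each year. A minimal sketch, with hypothetical figures:

```python
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """Classic risk-quantification formula: ALE = SLE x ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures: a generative AI incident costing USD 2 million,
# expected roughly once every four years (ARO = 0.25).
ale = annualized_loss_expectancy(2_000_000.0, 0.25)
print(f"Annualized loss expectancy: ${ale:,.0f}")

# A control whose annual cost is lower than the ALE reduction it delivers
# has a positive expected return, which is the quantified argument for
# privacy program investment described above.
```

Framed this way, a governance control that costs less per year than the loss expectancy it removes is an investment with a demonstrable return, which is exactly the case to put to executives and insurers.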
Moens: What are your top strategic tips for generative AI risk governance?
White: I encourage organizations and privacy professionals to work to become fluent in the language of risk. No matter if you are looking to use generative AI or any other new technology, you need to be able to put some value on the likelihood and degree of harms. Especially in the absence of explicit compliance requirements or standards for generative or general-purpose AI, you need to be able to establish that you are behaving in a reasonable, defensible manner.
Selby: First, work with your colleagues to assess your organization's risk tolerances and the extent to which insurance policies provide coverage adequate for the risks posed by existing and proposed generative AI implementations.
Second, educate your senior executives about the gaps identified where generative AI risks are unlikely to be covered by your existing insurance.
Third, collaborate with your organization's insurance brokers and providers to identify which additional AI governance systems and controls will provide the greatest return on investment. This will enable your organization to develop the governance maturity appropriate to the generative AI risks it is taking on.
Annelies Moens, CIPP/E, CIPT, FIP, is managing director at Privcore.