Organizations taking a proactive approach to managing artificial intelligence risk could help shape insurance providers' attitudes toward liability as the industry wrestles with how to cover AI-related incidents.

As AI technology has proliferated across the digital sphere, so has its involvement in cyberattacks and fraud. A series of high-profile incidents has illustrated the financial and reputational stakes. Air Canada had to honor a discount its chatbot had incorrectly promised a passenger. A scammer used deepfake videos of colleagues to fool an employee of the multinational firm Arup into transferring millions. And Google is being sued by Minnesota-based company Wolf River Electric after the AI Overviews feature falsely named the company as a defendant in a lawsuit, causing a customer to cancel a contract.

Incidents like these have made some insurance companies wary of providing coverage, or even prompted them to exclude AI from their corporate policies altogether. There are initial signs insurers are starting to look at a company's AI practices when weighing whether to offer coverage, but the all-purpose nature of the technology means it will likely take some time for standard approaches to develop, industry members said.


"I think that there's a lot of confusion and growth in this industry, because where these claims fit is still kind of being figured out," said Thomas Bentz, a partner at law firm Holland & Knight. "So, for example, if an AI program causes bodily injury, does that fall under your (commercial general liability), your general liability type of coverage, or does that fall on your cyber coverage?"

"You have got some of these gaps that really don't fit nicely into either program," he continued. "And so I think the industry right now is trying to figure out, how do we deal with that? How do we price for it? How do we make sure that it fits nicely in that box where it's supposed to go?"

Part of the challenge is that cyber insurance itself is still relatively young in the industry, Bentz said, arguing it has only existed in "substantive form" for 20 years and only as an enterprise risk solution for about 10 years.

Insurers develop views on what a policy was meant to cover, what could be added on as an endorsement, or what liabilities they do not want to cover only as unique claims come in over time, Bentz said. AI's recent ubiquity means insurers do not have as much history on which to base their policies.

Panos Leledakis, founder and CEO of the IFA Academy and a member of the National Association of Insurance and Financial Advisors, said peers in the cyber and commercial insurance industry are cautiously exploring what AI coverage could look like. 

Speaking from personal experience, he said factors they are considering include whether a company has basic AI governance or usage policies, what data handling and access controls look like when AI is used, and whether employees are trained on AI misuse and social engineering.

"That said, I cannot confirm that AI governance is yet a standardized or decisive underwriting criterion across the industry. It appears to be directional rather than formalized at this stage," he said.

But AI-related incidents are increasingly part of internal risk conversations, Leledakis said, particularly when it comes to deepfakes, synthetic identities, data leaks and unintended chatbot inputs. 

"While most insurers are not publicly labeling these as distinct 'AI incidents' yet, many are quietly treating them as extensions of cyber, fraud, professional liability, and errors and ommissions risk," he said.

To mitigate these risks, Leledakis said he is starting to see insurers look for stronger authentication and call-back protocols to counter deepfake fraud, human-in-the-loop requirements for AI-assisted client communications, restrictions on the use of public large-language models with sensitive or regulated data, and increased focus on disclosing, logging and auditing AI outputs.

Leledakis characterized the moment as a transitional one, with the industry experimenting with what risks it is willing to carry. He said governance and accountability around crafting AI-inclusive policies will likely develop further over the next one to two years.

A growing area of risk

But while best practices are still developing, the risks of fraud and of reputational and financial harm are growing. Coalition, a digital risk insurance company, named chatbots as an area of emerging risk in a 2025 report based on an analysis of nearly 200 cyber insurance claims from 2023-25 and website scans of 5,000 businesses.

"Chatbots were cited in 5% of all web privacy claims. These claims alleged that chatbot providers intercepted communications between the customer and the website owner without consent," the report states.

The claims followed the same format, with complainants arguing the chatbot's opening message should have disclosed that the conversation was being recorded and relying on the Florida Security of Communications Act, according to the report. That law, passed in 1969, has become the basis for several "digital wiretapping" lawsuits in the state, including a recently filed class-action complaint against Nike.

"Plaintiffs' attorneys have found repeatable strategies that make it easy to launch allegations of wrongful data collection, relying on the fact that everyday tools like tracking pixels, analytics scripts, and chatbots are deeply embedded across millions of websites," according to the report.

Coalition announced in December 2025 that it will start offering coverage for deepfake-related incidents that lead to reputational harm under its cybersecurity policies. Response services such as forensic analysis, legal support for taking down and removing deepfakes, and crisis communications assistance are included. The coverage is currently available in the Australian, Canadian, French, German, Danish, Swedish, U.K. and U.S. markets.

Daniel Woods, a principal security researcher with Coalition, said the endorsement policy is getting ahead of an issue that is likely to become more widespread in the future.

"First these deep fakes were launched against politicians, then celebrities. And what we see is these trends tend to filter down from like high profile to lower, until it becomes like a mass market issue," he said.

Businesses that do not have AI incidents explicitly covered in their cyber insurance but do have errors and omissions coverage in their policies may be able to claim some incorrect outputs from chatbots fall into that realm, Woods said.

But he also noted the traditional ways of protecting a business' digital security may not apply to AI-related incidents. Digital security coverage, for example, might hinge on having measures to protect a company's network and individual computers. Companies can prevent privacy complaints by managing their data and consent collection practices.

But with deepfakes, "the way threat actors launch these attacks is they need something like 10 seconds of footage of a corporate leader speaking, or a video of them," Woods said. "You know, most businesses can't avoid that. You need your corporate leaders to go out and speak for marketing purposes."

Caitlin Andrews is a staff writer for the IAPP.