As artificial intelligence spreads into all facets of commerce, cybersecurity is becoming a major talking point among business leaders seeking to secure the operation of AI models.

A range of stakeholders at the IAPP AI Governance Global North America 2025 shared their experiences confronting AI-generated attacks and the preparations they are making to build resilient organizational defenses.

Identifying risks

Crowell & Moring Partner Matthew Ferraro outlined how malicious actors are leveraging AI to expand the scope and complexity of their cyberattacks. Techniques include model extraction, data poisoning, and adversarial and backdoor attacks, in which hackers exploit small vulnerabilities within an existing model.
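To make one of those categories concrete, below is a toy sketch of data poisoning, in which an attacker flips a fraction of training labels and degrades the resulting model. The dataset, model and poisoning rate are illustrative stand-ins, not details from the session.

```python
# Toy illustration of data poisoning: flipping a fraction of training
# labels degrades a model trained on the tampered data. The dataset
# and model are synthetic stand-ins, not any system discussed above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Hypothetical attacker poisons 30% of the training labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_bad = y_tr.copy()
y_bad[idx] = 1 - y_bad[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```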

Additionally, Ferraro said cybercriminals are weaponizing AI to create malware, leveraging generative AI tools such as "WormGPT" to write polymorphic code that bypasses "signature-based defenses." He said hackers are also using AI for research and reconnaissance, analyzing open-source data to tailor attacks, and to create sophisticated phishing scams designed to erode trust among employees and enable social engineering.
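The "signature-based defenses" Ferraro mentioned typically match known byte patterns or hashes. The sketch below, which uses a harmless placeholder string rather than any real payload, shows why even a trivial, behavior-preserving mutation defeats an exact-match signature.

```python
# Why signature-based defenses struggle with polymorphic code: a
# signature keyed to an exact byte pattern (here, a SHA-256 hash)
# no longer matches after a trivial, behavior-preserving rewrite.
# The "payload" is a harmless placeholder string.
import hashlib

payload = b"print('hello')  # original variant"
variant = b"print('hello')  # rewritten variant, same behavior"

known_signatures = {hashlib.sha256(payload).hexdigest()}

def signature_match(blob: bytes) -> bool:
    return hashlib.sha256(blob).hexdigest() in known_signatures

print(signature_match(payload))  # True  -- exact copy is caught
print(signature_match(variant))  # False -- mutated copy slips through
```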

"AI really extends the threats from cyberattacks," said Ferraro, who served as the former Senior Counsel to the U.S. Secretary of the Department of Homeland Security from mid-2023 through the end of the Biden administration. "It's a testament to the quality of these models that they can produce really believable material."

Defense strategies

As organizational security leaders begin wrapping their minds around the depth and breadth of the AI threat matrix, practitioners said the process does not have to be overwhelming. OpenAI Senior Counsel Shannon Togawa Mercer presented concrete steps organizations can take to evaluate the cyber risk arising from their AI use.

She recommended developing a holistic grasp of organizational AI system use, including contextual mapping and cataloguing, before moving to assessments covering risks, bias, model performance and privacy impacts. Once those steps are complete, she said security teams should establish oversight through access controls and internal security policies before implementing meaningful governance practices.
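As a rough illustration of the cataloguing step she describes, the sketch below records AI systems with contextual and assessment fields so oversight can be applied per entry. The schema and field names are assumptions made for illustration, not a standard presented at the session.

```python
# Minimal sketch of an organizational AI-system catalog: each record
# carries contextual mapping plus assessment status, so unreviewed
# systems can be surfaced. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str
    use_context: str                 # contextual mapping
    data_sources: list[str]          # what the model sees
    risk_rating: str = "unassessed"  # e.g., low / medium / high
    bias_reviewed: bool = False
    privacy_impact_done: bool = False
    access_roles: list[str] = field(default_factory=list)

catalog = [
    AISystemRecord(
        name="support-chat-assistant",   # hypothetical system
        owner="customer-success",
        use_context="drafts replies to customer tickets",
        data_sources=["ticket history"],
    ),
]

# Oversight step: surface anything not yet fully assessed.
for rec in catalog:
    if rec.risk_rating == "unassessed" or not rec.privacy_impact_done:
        print(f"needs review: {rec.name} (owner: {rec.owner})")
```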

Leveraging AI as a cybersecurity defense is a growing trend. Togawa Mercer noted AI developers are rolling out new models to help with threat detection, conduct adversarial testing and automate model patching to fix vulnerabilities.
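One hedged sketch of what AI-assisted threat detection can look like in miniature: an unsupervised anomaly detector flags unusual events in synthetic telemetry. The features, data and thresholds here are invented for illustration; real deployments use far richer signals and tuning.

```python
# Toy AI-assisted threat detection: an unsupervised anomaly detector
# trained on normal activity flags outliers in synthetic telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Features per event (hypothetical): [hour of day, bytes transferred in KB]
normal = np.column_stack([rng.normal(13, 2, 500), rng.normal(200, 40, 500)])
odd = np.array([[3.0, 5000.0], [4.0, 7500.0]])  # late-night bulk transfers

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(odd))         # -1 marks events scored as anomalous
print(detector.predict(normal[:3]))  # mostly 1 (inliers)
```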

"There are models that do try to prevent the abuse of AI," she said. "There are defensive uses of AI, which is essential in this ecosystem because defending is always harder than attacking. … We've got to lean into the use of these models so defenders can get the upper hand."

Organizational cybersecurity posture is important with respect to AI training and use, but the strength of those defenses must extend to the posture of third-party vendors and other partners.

The Cantellus Group Senior Advisor Nancy Morgan said even the most proactive organizations will be left vulnerable if they fail to properly vet their business partners. She said re-examining existing governance structures around generative AI and agentic AI partnerships will best position organizations to see what threats may loom on the horizon.

"You have to go deeper than your partner ecosystem. The third, fourth, fifth party. Because those may be the areas you have the most disruption or the most risk," Morgan said. "You may have to fundamentally reinvent some of your processes and your work structures. If you’re going toward agentic AI, you can really be fundamentally blowing up your organizational chart and that's going to also mean looking at what governance structures make the most sense here."

Legal certainty

While organizational stakeholders must still do substantial groundwork to assess the risks of integrating AI into their operations, regulatory guidance helps companies identify and evaluate potential harms stemming from AI use.

Palo Alto Networks Assistant General Counsel and Senior Director for Public Policy and Government Affairs Sam Kaplan, AIGP, said emerging AI governance and cybersecurity frameworks around the globe are motivating organizational stakeholders to reimagine cybersecurity risk assessments in the age of AI.

"We are seeing some regulatory frameworks, both specific to AI and also in the broader technological space that are requiring organizations and companies to consider tech issues, cybersecurity, IT purchases, and making each of these new responsibilities into goals and directives, which is changing the nature and scope of the AI and technological governance processes," Kaplan said.

The frameworks Kaplan referenced vary in scope from broad, prescriptive AI regulations, such as the EU AI Act and the Colorado AI Act, to newer cybersecurity reporting frameworks containing AI risk evaluations, and to voluntary frameworks, like the U.S. National Institute of Standards and Technology AI Risk Management Framework.

"Those regulatory movements are putting more visibility at the board of directors level, and really forcing AI governance, broader technical governance and data governance issues to become more integrated into the broader risk management strategy," Kaplan added.

Crowell & Moring's Ferraro said a balance of new government regulation, organizational resourcing and general vigilance will be necessary to best defend against sophisticated attacks, including those that have yet to be developed.

"This is an area where the most active attempts to regulate and success is going to lie somewhere in the triangle of government regulation, and activity by platforms and model developers to help ascertain what is true and what is false," he said. "The third point on the triangle is society will have to demand, as well, that we distinguish between truth and falsity."

Alex LaCasse is a staff writer for the IAPP.