Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Privacy professionals often focus on the privacy risks associated with using artificial intelligence tools within organizations. However, as cybercriminals increasingly use AI to execute sophisticated cyberattacks, perhaps the perspective should shift to view good AI as a necessary technical and organizational measure to protect privacy. 

This raises the question: If cybercriminals are using AI to sharpen their swords and execute sophisticated cyberattacks, can privacy professionals consider their technical and organizational measures adequate for today's threat landscape if they are not using sophisticated AI cybersecurity countermeasures as a shield to protect data?

The threat landscape

According to the FBI Internet Crime Report 2024, the cost of cybercrime rose to USD 16.6 billion in 2024, a 33% increase from the previous year and a clear warning that cybercrime continues to rise. 

When you consider this alongside Anthropic's Threat Intelligence Report, which describes how cybercriminals are using generative AI to automate sophisticated threat campaigns, it paints a bleak picture. The Anthropic Report notes cybercriminals are weaponizing AI by embedding it into their operations and using it at all stages of a threat campaign, including data exfiltration, data analysis and the creation of false identities.

Unfortunately, AI lowers the barriers to entry for cybercriminals, meaning individuals who once lacked technical expertise can now use generative AI to scale sophisticated attacks and achieve far greater impact. This rise in scale and sophistication increases the strain on the defenses of those being targeted. 

Business email compromise, or BEC, and phishing show how cybercriminals can use AI to scale sophisticated attacks. BEC and phishing campaigns rely on impersonation techniques and social engineering to steal money from unsuspecting victims and/or obtain login credentials that give attackers access to information technology systems and data. Once upon a time, those with an eagle eye may have felt they could identify this type of email-based attack in their inbox, but that becomes increasingly difficult when cybercriminals use generative AI. 

As the Anthropic Report highlights, cybercriminals use AI to create profiles of their targets, meaning they can scale the delivery of threat campaigns tailored to an individual, their role and/or their organization, making the attack increasingly difficult for humans to identify. In the same way technology has passed the point where the average human could reliably identify an AI-generated image, so too have we passed the point of being able to reliably identify sophisticated malicious email attacks.

These reports raise important questions. How should we strengthen defenses when AI enables cybercriminals to execute high-volume, sophisticated attacks that are increasingly difficult for humans to spot? If cybercriminals are using AI to scale, working at the speed of machines, how do we keep up? Are we setting ourselves up to lose the war if we continue to move at the pace of humans while our adversaries move at the speed of machines? And how do we do all of this in a proportionate way that balances individual rights and freedoms? 

Remember, a cybercriminal only needs to get it right once to penetrate defenses and compromise data, whereas defenders need to get it right 100% of the time to keep data secure. Sadly, those odds favor the cybercriminal, and defending is only getting harder. 

The regulatory landscape

The consequences of losing that fight and suffering a breach are familiar to every privacy professional. The regulatory, reputational and potentially personal fallout for those affected can be severe. 

Privacy professionals are often considered stewards of building and maintaining trust, so embracing sophisticated AI cybersecurity tools may be an important part of technical and organizational measures to level the playing field. From a compliance perspective, the regulatory landscape is clear: organizations need to implement robust, state-of-the-art security measures to protect data, which supports the use of sophisticated AI cybersecurity tools designed to protect and preserve privacy. 

EU General Data Protection Regulation. Article 32 requires organizations to adopt "appropriate technical and organisational measures to ensure a level of security appropriate to the risk," taking into account "the state of the art" and the "likelihood and severity" of the risk. 

The FBI and Anthropic reports highlight both the likelihood of cyberattacks and the severity of the risk. If cybercriminals are using AI as a state-of-the-art tool to carry out their attacks, then it stands to reason that AI is the necessary state-of-the-art technology for the defense.

Digital Operational Resilience Act. Article 6 requires financial entities to adopt "a sound, comprehensive and well-documented ICT risk management framework" including strategies and tools "necessary to duly and adequately protect all information."

Furthermore, Recital 34 encourages financial entities to share cyberthreat intelligence to improve collective resilience. AI models excel at ingesting and identifying patterns in large datasets, meaning federated threat intelligence could enable AI systems to detect new attack vectors to keep pace with cybercriminals.

NIS 2 Directive. Article 21 reinforces similar expectations, requiring "appropriate and proportionate technical, operational and organisational measures," again highlighting "the state-of-the-art" and a proportionality assessment to consider the likelihood and severity of incidents.

The obligation to protect data extends beyond personal data under the GDPR to broader digital and cyber resilience. More than ever, privacy and cybersecurity are two sides of the same coin.

EU AI Act. The act provides an interesting perspective. Recital 54 explains that AI systems processing biometric data solely for cybersecurity and personal data protection should not be considered high-risk. The same position should reasonably extend to AI systems that process less sensitive types of personal data for the same purpose. 

Similarly, Recital 55 explains that AI systems used as components of critical infrastructure, but intended to be used solely for cybersecurity purposes, are not high-risk. These carve-outs signal that using good AI for cybersecurity purposes supports wider regulatory compliance and a key purpose of the EU AI Act: to promote the uptake of trustworthy AI. 

Conclusion

The rising cost of cybercrime is clear, and AI is making it easier for cybercriminals to scale tailored attacks against their targets, giving them a greater likelihood of success. But like any tool or technology, AI itself is neutral — it can be used for good or bad. 

When cybercriminals choose to use AI for bad, it is important that privacy professionals consider whether their current technical and organizational measures can keep pace and keep up the good fight. There is a risk that when organizations consider AI use cases, they fall into the habit of focusing on the risks AI poses to data protection without considering the risks of not adopting AI to protect data. 

In fact, using good AI is justifiably a necessary part of technical and organizational measures. Rather than posing a threat to data protection compliance, good AI — implemented with good governance — can provide the state-of-the-art defense necessary to uphold it. 

Jo Hand, CIPP/E, is legal director, privacy at Abnormal AI, advising on global privacy, cybersecurity and AI regulatory compliance.