New AI model sparks alarm as governments brace for AI-driven cyberattacks

Anthropic's Claude Mythos is magnifying the friction developing between AI and cybersecurity.

Contributors:
Lexie White
Staff Writer
IAPP
The rapid advancement in artificial intelligence across sectors is challenging both regulators and technology companies as they work to confront a wide array of potential risks. The convergence of AI and cybersecurity is one area in particular generating increased friction.
Anthropic recently announced the launch of the Project Glasswing initiative, which would use the company's Claude Mythos Preview model to detect zero-day and other cybersecurity vulnerabilities and prevent data security incidents. The program was released to an exclusive group of partner companies, including Amazon Web Services, Apple, Google, JPMorgan Chase, Microsoft and NVIDIA, with the aim of helping them get ahead of potential threats.
Anthropic characterized frontier AI capabilities as "likely to advance substantially over just the next few months," which makes Glasswing a necessary tool "for cyber defenders to come out ahead."
While Mythos could strengthen cyber defense postures, governments and stakeholders are cautiously preparing for what happens when threat actors begin using similar detection technology.
According to the Financial Times, U.S. Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent met with U.S. banks this week to discuss the potential for the Mythos model to be used to conduct large-scale data breaches that could damage the financial sector.
Some financial institutions were already examining what the proliferation of AI-powered cyberattacks would mean for their operations. JPMorgan Chase CEO Jamie Dimon's annual letter called attention to growing cybersecurity threats and how "AI will almost surely make this risk worse." He also noted heightened risks from AI will require increased resources toward securing systems.
Politico reports the European Commission also flagged the security implications of Anthropic's tool, stating there are notable risks associated with cybersecurity technology that claims it can outperform humans when "finding and exploiting software vulnerabilities."
In response to the backlash, Anthropic agreed to slow the launch of the tool beyond its partner preview in order to assess security risks. Commission spokesperson Thomas Regnier said Commission officials welcome Anthropic's decision to temporarily halt the launch. EU regulators are currently working with the company on how it will safeguard the release of the tool to prevent potential misuse.
US appeals court rejects Anthropic's efforts to prevent supply chain risk labeling
A panel of judges from the U.S. Court of Appeals for the D.C. Circuit dismissed Anthropic's request to block the U.S. from designating the company as a supply chain risk amid national security concerns, Politico reports.
The Department of Defense designated the company as a supply chain risk after the Pentagon and Anthropic could not reach an agreement over the use of AI in certain military decisions.
The judges said while Anthropic could face financial loss from the designation, "the other side is judicial management of how, and through whom, the Department of (Defense) secures vital AI technology during an active military conflict."
The panel urged that a final decision in the case be accelerated to mitigate potential harm to the company. Anthropic said in a statement it is "grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful."
Florida attorney general to open ChatGPT probe
Florida Attorney General James Uthmeier announced his office will launch an investigation into OpenAI's ChatGPT after the chatbot was allegedly used in the planning of a 2025 school shooting at Florida State University.
According to Politico, the probe aims to determine how the chatbot impacts public safety and what safeguards are in place to prevent tools from aiding criminal activity. Uthmeier warned the office plans to subpoena other providers in an effort to expand AI enforcement.
"AI should exist to supplement, support and advance mankind, not lead to an existential crisis or our ultimate demise," Uthmeier said. "As Big Tech rolls out these technologies, they should not, they cannot, put our safety and security at risk."
The move comes after a U.S. jury previously found Meta and YouTube liable for design features that allegedly impacted underage users' mental health.
In a statement to Politico, OpenAI said it will cooperate with the investigation and highlighted its ongoing safety work that "continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery."
A day before Uthmeier's announcement, OpenAI launched its Child Safety Blueprint that looks to introduce a "practical path forward for strengthening U.S. child protection frameworks in the age of AI."
The blueprint includes feedback from the U.S. National Center for Missing and Exploited Children and the Attorney General Alliance's AI Task Force. The framework suggests U.S. AI efforts focus on preventing AI-generated child sexual abuse material, bolstering reporting mechanisms, and supporting AI companies as they implement "safety-by-design measures" within AI systems.
On behalf of the Attorney General Alliance AI Task Force, North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown said the strength of voluntary frameworks like the new blueprint "depends on the specificity of its commitments and the willingness of industry to be held accountable against them." They welcomed the framework's "recognition that effective GenAI safeguards require layered defenses — not a single technical control, but a combination of detection, refusal mechanisms, human oversight, and continuous adaptation to emerging misuse patterns."
xAI files lawsuit over Colorado's AI legislation
xAI announced a lawsuit claiming Colorado's AI Act violates consumers' First Amendment rights by impacting what information AI tools can tell users, the Financial Times reports. The company claimed Colorado's legislation would negatively impact AI innovation and use, noting it would force companies to "embed the State's preferred views into the very fabric of AI systems."
Colorado's AI Act, which will go into effect 30 June, introduces obligations for AI developers to prevent discrimination within AI tools. The bill aims to require companies to notify the state if they identify discrimination in their systems and to allow consumers to correct personal data collected to prevent bias when used for sensitive purposes such as employment, housing decisions and education.
Gov. Jared Polis, D-Colo., previously said he has "reservations" about the bill. However, Polis supported the latest Colorado AI Policy Work Group framework, which would amend the AI Act by removing impact assessment requirements and implementing additional transparency obligations.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.