A pair of U.S. House Committee on Homeland Security subcommittees held a joint hearing 17 Dec. to discuss the growing cybersecurity concerns associated with advances in artificial intelligence and quantum computing.
Through a national security lens, the Subcommittee on Oversight, Investigations, and Accountability joined the Subcommittee on Cybersecurity and Infrastructure Protection to question representatives of Big Tech companies and cybersecurity professionals on potential strategies Congress should pursue to safeguard U.S. digital infrastructure from threat actors incorporating cutting-edge technologies into their hacking operations.
No concrete legislative proposals were tabled for discussion. Lawmakers instead used the hearing to explore the growing complexity and strength of AI- and quantum-driven cyberattacks, and how the proliferation of those attacks has only just begun.
"The rapid development of emerging technologies, including advanced AI and quantum computing enables and enhances security risk," Oversight, Investigations and Accountability Ranking Member Shri Thanedar, D-Mich., said. "These advanced technologies not only accelerate the cyber abilities of countries, such as China, but they also make it easier for countries that are not well-resourced and enable a growing threat from organized criminal groups. Over the past year, cyber attacks have become faster, more widespread and harder to detect as AI-assisted cyber attacks hit harder and faster."
The hearing was timely given reports from prominent AI developers regarding the use of their frontier models to enhance cyberattacks. Anthropic and OpenAI each recently outlined how increased model capabilities benefit general users but also raise the risk of misuse by cybercriminals to increase the complexity of an attack.
Thanedar said both organized crime groups and threat actors backed by nation-states, such as China, North Korea and Russia, have spent the last two decades refining and increasing the sophistication of cyberattacks that are "used to spy, steal intellectual property, cripple critical infrastructure and demand ransom payments." He called on Congress to extend the Cybersecurity Information Sharing Act, which provides liability protection for companies that report cyberattacks to the government, before it expires at the end of January next year.
Cybersecurity and Infrastructure Protection Chair U.S. Rep. Andy Ogles, R-Tenn., emphasized the need to craft substantive, bipartisan solutions that address concerns. Part of that solution, according to Ogles, could be a new bipartisan working group dedicated to building out proposals to circulate among subcommittees.
"If we don't get this right, we're screwed, and if we mess this up it changes everything forever," Ogles said. "Forget ideologies, politics and who you voted for, this is about national security. I truly can't imagine what the future looks like, but it's coming whether we prepare for it or not."
Anthropic report serves as jumping-off point for AI, cybersecurity questions
The Anthropic report, which Homeland subcommittee members focused on during the hearing, detailed how Chinese hackers were recently discovered to have used the coding feature of the company's AI model, Claude, to target approximately 30 global organizations with autonomous cyberattacks. Per the report, the threat actors prompted Claude by tricking the model into thinking it was performing defensive cyber tasks for legitimate companies.
Anthropic Frontier Red Team Department Head Logan Graham told the subcommittees the Chinese-sponsored cyberattacks using Claude did not compromise the model's internal code, nor was the company itself hacked. He said the nature of the attack suggests upward of 80% to 90% of the human tasks necessary to execute an effective cyberattack could be automated using agentic AI systems.
"This is a significant increase in the speed and scale of operations compared to traditional methods," Graham said. "This group invested significant resources and used their sophisticated network infrastructure in order to circumvent our safeguards and detection mechanisms. Then, they deceived the model into believing the tasks were ethical cybersecurity tasks."
Rep. Morgan Luttrell, R-Texas, noting Anthropic ultimately detected the attack via human oversight, asked what risks are posed if continued AI advancement produces models that automate more and more tasks humans previously performed.
"If we move to a point where artificial intelligence removes the human element, but you needed the human element to (discover the attack), what happens?" Luttrell asked Graham. "By the time you show up in front of us to tell us what happened, whomever took ahold of Claude, are they lying in-wait? Are they sleeping inside the program … so now they know how you fixed it and they're going to attack someone else who is not as strong and capable?"
Graham said the attack did trigger a series of automated detection measures. However, the "obfuscation network" the hackers used masked their country of origin and effectively split the attack into smaller components that partially evaded those detection features. Had the security features discerned the users were based in China, Graham said, their activities would have been flagged earlier by Claude.
To prepare for similar threats, Graham recommended Congress consider future measures that would create mechanisms allowing for "rapid testing of models for national security capabilities," establish a threat intelligence sharing program for model developers to report concerns to relevant government agencies, and empower cyber defenders with commensurate AI capabilities.
"This is the first time we're seeing some of (the dynamics specific to Chinese attack)," Graham said. "Sophisticated actors are now doing preparations for the next time, the next model, for the next capability they can exploit. This is why we have to be detecting them as fast as possible and mitigating them within the model."
Automating cyber defenses
Google Vice President for Privacy, Safety and Security Engineering Royal Hansen indicated his company's threat intelligence team identified a "shift" within the last year in how malicious actors leverage AI: beyond "productivity gains," they have begun to deploy "novel AI enabled with malware in active operations."
Hansen said professionals working in cybersecurity will ultimately "need to be armed with the same type of automation" and supported in their efforts to experiment with the advanced AI that cybercriminals and nation-state actors are now utilizing. With so much commerce conducted via legacy technology systems, he argued, the best way to mount robust defenses will be to embrace AI technologies that can automate the patching of those systems' existing vulnerabilities.
"This marks a new operational phase of AI abuse involving tools that dynamically alter behavior mid-execution, and while still nascent, this development represents a significant step toward more autonomous and adaptive malware," Hansen said. "We believe not only that these highly sophisticated threats can be countered, but that AI can supercharge our cyber defenses and enhance our collective security. (Large language models) can unlock new and promising opportunities, (such as) sifting through complex telemetry to secure coding, vulnerability discovery and streamlining operations."
'Accelerating forces'
When lawmakers raised questions regarding quantum computing technology and future cybersecurity risks, their primary focus was which types of government data should be secured first against quantum computers that could break existing encryption.
Quantum Xchange CEO Eddy Zervigon testified the U.S. must pursue an "architectural approach" to defend against quantum-enabled cyberattacks. Such an approach, he said, would require the U.S. national security apparatus to proactively reinforce secure networks through post-quantum cryptography.
"For more than 50 years, encryption has safeguarded our data from theft and misuse. We've had the luxury of a set-it-and-forget-it mindset, trusting strength by default," he said. "That era is ending now with quantum computing."
Seven Hill Ventures Founding Partner Michael Coates outlined five key areas where Congress can act to build cyber resilience and prepare for the AI and quantum-enabled attacks of the future.
Beyond adopting a proactive mindset, Coates' recommendations included requiring secure-by-design principles in hardware and software development as a "baseline expectation," ensuring cyber defenses can be streamlined and automated, and mandating transparent and trustworthy AI development.
"Intelligent automation allows attacks to become continuous rather than episodic, eroding assumptions that organizations can recover between incidents or rely on periodic assessments," Coates said. "Artificial intelligence and quantum computing are accelerating forces that dramatically reshape cybersecurity. Our success will depend on whether our technical, operational and institutional responses can adapt at a comparable pace."
Alex LaCasse is a staff writer for the IAPP.