Thought for the week: To Claude or not to Claude, that is the question

The U.S. Department of Defense's designation of Anthropic's AI model Claude as a national security supply-chain risk could have broad implications for companies.

Contributors:
Brian Hengesbaugh
CIPP/US
Global Chair, Data and Cyber
Baker McKenzie
Editor's note
The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains.
This article is part of an ongoing series that will explore issues or recent developments in data, cybersecurity and artificial intelligence governance.
As you begin your week, check out this statement from Anthropic CEO Dario Amodei on "Where things stand with the Department of War." By way of brief background, President Donald Trump and the U.S. Department of Defense indicated publicly last week that the U.S. government would designate Anthropic's artificial intelligence model Claude as a supply chain risk to the country's national security. The DOD is apparently already using Claude in the military action in Iran and previously had used it in Venezuela.
The current disagreement, according to Anthropic, relates to two exceptions the company requested to the government's lawful use of Claude: "the mass domestic surveillance of Americans" and "fully autonomous weapons." Anthropic asked for the first exception because it believes such surveillance would be a violation of fundamental rights, and for the second because it does not believe the AI models are reliable enough to be used in such weapons. It's not clear whether the U.S. government has a specific desire to pursue either of those use cases, or whether there is a more generalized desire to avoid contractual restrictions on the use of Claude, particularly in what seems to be a time of war.
As of this writing, we have not seen the actual written order. Anthropic confirmed publicly on 5 March that it had received a letter from the DOD designating it a supply chain risk. There had been some uncertainty, based on earlier public comments, as to the scope of the order, but Anthropic's announcement reports that the department's letter has a narrow scope.
Specifically, Anthropic reports that the cited statutory authority, 10 U.S.C. 3252, which sets requirements relating to supply chain risk, is narrow. It also states the order as written "applies only to the use of Claude by (Anthropic) customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts (with the DOD for other products and services not using Claude)."
Is it correct that the order is narrow?
Yes, if Anthropic's announcement is accurate, the good news is that the order is focused on companies that do business with the DOD and use Claude as a direct part of those contracts. This should limit the range of companies the order would directly impact.
Notably, from a legal perspective, it would have been difficult to substantiate an order that is broader — for example, one that purported to prohibit anyone doing business with the DOD if they use Claude for unrelated purposes. The reality, however, is that the administration could have sought to drive for such an outcome on the basis of political will, even if such order would exceed this particular statutory authority.
Does the order carry some mixed messaging?
Yes, it seems so, based on what information is available. The legal grounds for making a determination in this context, under the code, would be that the ban is "necessary to protect national security by reducing supply chain risk."
However, it doesn't seem there is really a supply chain risk with Claude, as evidenced by the fact that the U.S. is using the AI model in its current military action, and wants to use it more. In fact, it seems that the U.S. government wants the contractual rights to use Claude without the requested restrictions.
So, the real messaging is along the lines of: "I don't know if you noticed, but we're at war, and we need your AI model to help, so we don't want your requested restrictions." That's a different message, and one we should all be attentive to, as it has broader implications.
What happens next with Anthropic?
Anthropic has indicated it plans to challenge the order in court. And, presumably, both the DOD and Anthropic will keep negotiating in the meantime, hopefully reaching a resolution short of a court proceeding.
What are the broader implications for companies?
We can see several implications for companies.
Direct impact. Companies that contract with the DOD, and use Claude as a direct part of their contracts with the department, may be directly affected. These companies should assume the order will be released at some point soon, should know the scope of impact to them and should be working through a remediation plan, if feasible.
Other orders possible. Beyond the Anthropic order, a question is whether we should prepare for other orders to come. This type of authority typically has been applied to foreign (non-U.S.) suppliers that introduced some kind of cyber or supply chain risk, so it's somewhat unusual to see it applied here to a U.S. company.
Broader implications. It seems the broader implications relate to the reality that the U.S. is at war. That seems strange typing it even now, but it's true. I'm not a military expert — understatement — but I don't think we really need more firepower to succeed in Iran.
So, is the implication that we need to be ready for more or different conflicts? And, from a commercial perspective, do we need to be ready for more, not less, bending of authorities in the direction of political needs and beyond traditional bounds of statutory limitations? Seems like a real possibility.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.