The U.S. Department of Defense is reportedly close to formally cutting business ties with Anthropic, the AI company behind the Claude language model, and may designate it as a “supply chain risk” — a severe classification usually reserved for foreign adversaries — amid a deepening dispute over how AI can be used by the U.S. military.
What’s Happening
According to Axios, senior Pentagon officials say Defense Secretary Pete Hegseth is nearing a decision to label Anthropic a supply chain risk, a move that would effectively force all U.S. defense contractors to sever ties with the company if they wish to continue working with the military.
This escalation stems from a standoff over usage restrictions that Anthropic has placed on Claude. While the Pentagon wants the flexibility to employ AI for “all lawful purposes,” including in classified military operations and battlefield decision-making, Anthropic has resisted broad use authorizations that could see its technology tied to mass surveillance of Americans or autonomous weapon systems.
Why It Matters
A supply chain risk designation is more than symbolic. It would legally require companies that do business with the Defense Department to certify that they are not using Anthropic’s technology, meaning a vast swath of the Pentagon’s contractor base could be forced to drop Claude from their systems. That outcome could reverberate far beyond military procurement: Anthropic has said Claude is in use at eight of the ten largest U.S. companies.
Importantly, Claude remains the only AI model currently cleared for use on some of the Pentagon’s classified networks, where contractors such as Palantir have integrated it into broader systems. The model was also reportedly used in a classified U.S. military operation earlier this year, though details remain limited and have recently been disputed in public statements.
Anthropic’s Stance
Anthropic has publicly emphasized its commitment to ethical guardrails — opposing uses of AI for mass civilian surveillance or for developing weapons that operate without human oversight. The company has indicated a willingness to negotiate on terms, but only where it can maintain safeguards aligned with its responsible-use principles.
Despite the friction, negotiations between the company and the Pentagon are reported to be ongoing, even as defense officials press for broader permissions.
Broader Implications
This dispute crystallizes a broader tension at the intersection of national security and AI ethics: military agencies seek expansive access to powerful AI tools in pursuit of operational advantage, while leading AI developers insist on guardrails to mitigate risks related to civil liberties, autonomous weapons, and unchecked surveillance.
Experts have long warned that the integration of AI into warfare and intelligence systems carries profound strategic, ethical, and legal consequences — spanning everything from command decision-making to civilian harm prevention. This standoff may mark a watershed moment in who ultimately shapes the rules governing AI’s role in national defense: tech companies, defense institutions, or lawmakers and regulators yet to act.
What Comes Next
At present, the Pentagon has not publicly confirmed a final decision, and discussions continue behind closed doors. If a supply chain risk designation is finalized, however, it could dramatically reshape the landscape for AI companies and defense partnerships, with ripple effects across both industry and government.
https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro