Anthropic CEO Dario Amodei has returned to negotiations with the United States Department of Defense, days after talks collapsed and the White House directed federal agencies to stop using the company's AI tools.
Amodei is in discussions with Emil Michael, under-secretary of defence for research and engineering, in an attempt to salvage the relationship, the Financial Times reported, citing anonymous sources; Bloomberg subsequently corroborated the renewed talks.
The stakes are massive: Anthropic is preparing to go public. A supply-chain designation that lingers through an IPO roadshow is not merely reputational - it reprices the company's enterprise revenue assumptions entirely.
Investors in AI infrastructure, or anyone holding OpenAI-adjacent positions, should note that the classified-network contract Anthropic vacated did not disappear. It just moved.
It all kicked off a week ago…
Anthropic CEO Dario Amodei told staff that near the end of negotiations with the U.S. Department of Defense, the Pentagon offered to accept the company's contract terms on one condition: delete a specific phrase about "analysis of bulk acquired data" - a clause Amodei said "exactly matched the scenario we were most worried about".
Then, on 27 February, President Donald Trump ordered every federal agency to immediately stop using Anthropic's technology.
Defence Secretary Pete Hegseth had designated the company a supply-chain risk to national security - a label typically reserved for foreign adversary-linked entities - and within hours, OpenAI jumped into the gap and announced its own classified-systems deal with the Pentagon.
How Claude ended up in the classified stack
Rewind to November 2024, when Anthropic first partnered with Palantir and Amazon Web Services to supply Claude to defence and intelligence systems, including classified environments.
By July 2025, the Chief Digital and Artificial Intelligence Office (CDAO) had formalised a two-year prototype agreement worth up to US$200 million - and Claude became the first AI model deployed on the military's classified networks, ahead of OpenAI and Google, which held deals only on unclassified systems at that point.
The original contract included explicit restrictions: no use of Claude for mass domestic surveillance, and no deployment in fully autonomous weapons systems.
Hegseth's January 2026 AI strategy memo, which demanded "any lawful use" language across all DoD AI contracts, put those restrictions on a direct collision course with the department's new policy.
Timeline to blacklist
- Late 2024: Anthropic partners with Palantir and AWS to deploy Claude in classified DoD environments.
- June 2025: Claude Gov launched specifically for national security customers.
- July 2025: The US$200 million Pentagon contract formalised through the CDAO.
- January 2026: Defence Secretary Hegseth issues an AI strategy memo requiring "any lawful use" language in all DoD AI contracts.
- 24–26 February 2026: The Pentagon sets escalating ultimatums; Hegseth meets personally with Amodei; threats of Defense Production Act use and a supply-chain risk designation emerge.
- 27 February 2026, morning: Sam Altman publicly states he shares Anthropic's red lines on surveillance and autonomous weapons.
- 27 February 2026, 5:14 p.m.: Hegseth announces on X that Anthropic has been designated a supply-chain risk. Trump orders a government-wide ban. OpenAI announces its classified-systems deal with the Pentagon the same evening.
- 5 March 2026: Amodei reopens discussions with the Pentagon.
The clause that broke the deal
Anthropic's two stated limits were: no fully autonomous weapons, given that current frontier AI models are not considered reliable enough for that application; and no mass domestic surveillance of Americans.
The Pentagon characterised both as overreach and held its position on unrestricted access for any lawful purpose.
The underlying disagreement was legal.
Anthropic argued existing law has not kept pace with AI capabilities, and that the technology could significantly expand the legal collection of publicly available data - social media posts, geolocation, bulk signals.
The Pentagon's view was that existing statutes already addressed those concerns, making additional contract-level restrictions unnecessary.
OpenAI's deal and what it covers
By Friday night, Altman had reached an agreement with the Pentagon for deploying advanced AI in classified environments, with stated protections against surveillance and autonomous weapons use - provisions similar to those Anthropic had sought.
Altman later said the timing "looked opportunistic and sloppy."
OpenAI's contract ties its restrictions to existing law and DoD policy, rather than the absolute contract-level prohibitions Anthropic had pursued - a distinction legal analysts say matters, given that law and policy can be amended without renegotiating the contract.
Who gains, who is exposed
OpenAI holds the classified-network contract Anthropic vacated and is the direct commercial beneficiary.
Palantir faces transition costs, with analysts noting that replacing Claude in classified workflows will divert resources from growth activity.
Google's Gemini for Government and xAI's Grok for Government are positioned to absorb further DoD work, though neither has Claude's established footprint in classified environments.
For Anthropic, the supply-chain designation poses a broader commercial risk than the contract loss alone - around 80% of its revenue comes from enterprise customers, some of whom hold or seek Pentagon-adjacent work.
On the consumer side, Claude reached number one on the U.S. Apple App Store within 24 hours of the ban, with a social media campaign against ChatGPT gaining traction simultaneously - a development that carries weight for a company preparing to go public.
As Amodei and the Pentagon are now back in talks, keep an eye out for steady newsflow on this.