Anthropic has lost its bid to temporarily block the Pentagon’s blacklisting as a lawsuit challenging that sanction plays out.
Following the Pentagon's classification of Anthropic as a supply-chain risk, the artificial intelligence (AI) company told the court that the designation was likely to impact its revenue and investors.
The D.C. Circuit appeals court found that, even though Anthropic’s finances were negatively affected, that harm was not weighty enough to justify overriding the U.S. government on a national security issue.
“On one side is a relatively contained risk of financial harm to a single private company,” the appeals court said.
“On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict.
“For that reason, we deny Anthropic’s motion for a stay pending review on the merits.”
In the ruling on Wednesday, the court acknowledged that Anthropic “will likely suffer some degree of irreparable harm absent a stay”, but said the company’s interests “seem primarily financial in nature”.
The acting U.S. attorney general said in a post on X that the decision was “a resounding victory for military readiness”.
An Anthropic spokesperson said in a statement after the ruling that the company is “grateful the court recognised these issues need to be resolved quickly” and that it’s “confident the courts will ultimately agree that these supply chain designations were unlawful”.
“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” Anthropic said.
This comes after a judge in California granted Anthropic a preliminary injunction halting President Trump’s government-wide ban on the use of Claude by federal agencies.
The U.S. government is now appealing that decision, and the uncertainty has led some companies to say they would stop using Claude in government work to avoid potential problems.
Anthropic’s battle with the U.S. government began after contract renewal negotiations fell through, with the AI company insisting that its models not be used for fully autonomous weapons or domestic surveillance.
The Pentagon wanted an agreement where the military could use Anthropic’s AI in all legal applications.
For now, Anthropic’s AI model, Claude, remains in use across federal agencies, as President Donald Trump had set a six-month deadline in February for U.S. federal agencies to shift off the platform.