Federal Communications Commission Chairman Brendan Carr said Anthropic “made a mistake” in its dealings with the Department of Defense (DoD) after the United States government blacklisted the AI firm.
Anthropic has been in tense talks over the terms of its contract with the Pentagon.
The startup asked for assurance that its technology would not be used for domestic mass surveillance of Americans or for fully autonomous weapons.
The DoD wanted the AI firm to allow the military to use the models across all lawful use cases.
Talks stalled last week after Anthropic CEO Dario Amodei said the company “cannot in good conscience” allow the use of its models under these conditions.
In response, Carr told CNBC that Anthropic made a mistake.
“There’s obviously rules of the road that are in place that are going to apply to every technology that the Department of War contracts with,” he said.
President Donald Trump then ordered every U.S. government agency to “immediately cease” use of Anthropic’s technology.
Defense Secretary Pete Hegseth escalated matters with a post on X calling Anthropic a “supply-chain risk to national security”.
“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” he posted.
In response to being blacklisted, Anthropic said it was “deeply saddened” by these developments.
“We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above,” the company said.
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.”
Hours after the blacklisting, OpenAI CEO Sam Altman said his company had agreed to the DoD’s terms regarding the use of its AI models.
Altman said OpenAI “shouldn’t have rushed” its deal with the Department of Defense, adding that it “looked opportunistic and sloppy.”
OpenAI later revised the terms of agreement, clarifying that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”