The Pentagon is escalating its campaign to embed artificial intelligence (AI) deep inside America’s defence architecture, and it wants fewer strings attached.
At a White House gathering this week, Pentagon Chief Technology Officer Emil Michael made clear the military’s ambition: deploy “frontier AI” - that is, the most advanced, cutting-edge models available - across both unclassified and classified systems.
The Pentagon is “moving to deploy frontier AI capabilities across all classification levels”, an official who requested anonymity told Reuters.
Simply put, that includes the secure networks used for mission planning and weapons targeting, not just back-office administration.
The manoeuvre is seen less as a procurement exercise than as a test of who ultimately controls the guardrails around powerful AI tools.
For now, most tech groups working with the Defence Department offer versions of their models limited to unclassified environments; the lower-security systems are used by millions of staff for day-to-day tasks.
OpenAI this week formalised access to its tools, including ChatGPT, on a Defence platform known as genai.mil, already rolled out to more than three million employees.
The agreement reportedly relaxed many of OpenAI’s standard user restrictions, though some safeguards remain.
Any move into classified networks would require a fresh agreement.
Classified systems, unlike administrative networks, handle sensitive material such as operational planning and targeting decisions.
Errors in that setting are not merely reputational; they can be fatal.
Large language models - AI systems trained on vast datasets to generate human-like text - are known to “hallucinate”, that is, to fabricate information that sounds plausible but is false.
Military officials argue AI can rapidly synthesise intelligence from multiple sources, speeding up decision-making in conflicts increasingly defined by drone swarms, autonomous systems and cyber warfare.
The Pentagon’s position is unambiguous: as long as usage complies with U.S. law, commercial constraints should not override operational need.
However, technology firms are less unified.
Anthropic, the maker of chatbot Claude, has allowed limited classified use through third parties but maintains its own usage policies.
Company executives have reportedly resisted deploying their models for autonomous weapons targeting or domestic surveillance.
Autonomous targeting refers to systems selecting and striking targets without direct human control, a prospect that has unsettled both engineers and ethicists.
The tension reflects a broader shift in the AI sector: what began as a race for consumer productivity tools is rapidly becoming a competition for defence contracts.
For companies such as OpenAI, Alphabet’s Google and Elon Musk’s xAI, defence spending offers scale and stability.
Government contracts are long-dated, less exposed to economic cycles and can anchor future revenue.
But the trade-off is proximity to national security decision-making.
Inside several AI firms, staff have questioned how far their technology should travel into military systems.
Some researchers have publicly warned about insufficient oversight, while others have departed, citing concerns that commercial pressure is outpacing safety frameworks.
At issue are the “guardrails” that prevent models from generating certain categories of content or being used in restricted ways.
The Pentagon’s frustration is that those limits are designed by private companies, not elected governments.
From its perspective, national security policy should not be dictated by software settings in Silicon Valley.
The companies’ counterargument is that once models are embedded in classified networks, oversight becomes opaque.
In other words, external researchers cannot test systems, public accountability diminishes, and mistakes may remain hidden.
The bottom line is that the Pentagon wants the most capable tools available, unconstrained by corporate caution.
Meanwhile, AI firms want access to defence budgets without surrendering control of how their models are used.
Between those positions lies the future of AI in warfare and the unanswered question of who sets the rules when algorithms sit inside the chain of command.