The Albanese government signed its first AI safety partnership with Anthropic on the same day the company confirmed its second major data leak in a week, exposing the full architecture of its flagship product to the public internet.
On Tuesday, Prime Minister Anthony Albanese welcomed Anthropic chief executive Dario Amodei to Parliament House to formalise a memorandum of understanding on AI safety, research and investment - the first formal arrangement under Australia's National AI Plan.
On Monday, Anthropic confirmed that a packaging error had exposed the entire source code of Claude Code - the AI coding tool that generates an estimated US$2.5 billion in annualised revenue - to a public software registry.
The timing was not ideal.
The Claude Code incident was Anthropic's second data spill in five days, following the accidental publication of close to 3,000 internal files through an unsecured content management system.
That earlier leak revealed details of an unreleased AI model codenamed Mythos, which the company's own draft documentation describes as posing "unprecedented cybersecurity risks."
Across the two incidents, the public could access 500,000 lines of Claude Code's source code across 1,900 files, including 44 unreleased feature flags, internal performance benchmarks and the orchestration logic behind the guardrails designed to prevent misuse.
Anthropic attributed both leaks to human error and said no customer data was compromised.
Partnership terms
The MOU is explicitly non-binding - it confers no preferential treatment in government procurement and carries no legal force.
It establishes a framework for Anthropic to share its Economic Index data on AI workforce adoption, collaborate with Australia's AI Safety Institute on model testing, explore data centre investment and support research at four institutions including ANU and the Garvan Institute, backed by A$3 million in API credits.
Minister for Industry Tim Ayres said the agreement sent "a clear signal to Australians that we are open for business, where investment aligns with Australia's priorities and Australian values."
Anthropic also committed to aligning any future Australian operations with the government's clean-energy and community-impact expectations for data centre developers, released on 23 March 2026.
The institutional gap
Australia's AI Safety Institute, announced in November 2025, received A$29.9 million in funding but is not yet fully operational.
The UK equivalent, launched in late 2023 with £100 million in initial funding, has since committed more than £15 million in alignment research grants alone and employs over 30 senior technical staff recruited from OpenAI, DeepMind and Oxford.
The UK institute's most recent Frontier AI Trends Report found that AI models can now complete apprentice-level cybersecurity tasks 50% of the time - up from roughly 10% in early 2024 - and that the complexity of tasks models can handle unassisted is doubling every eight months.
Australia's AISI has no statutory enforcement powers.
It sits inside the Department of Industry, Science and Resources - the same department responsible for attracting AI investment into the country.
The government also scrapped a previously funded external AI Advisory Body that would have provided independent oversight from industry, civil society and academia, replacing it with the AISI's in-house capability and what a government spokesperson described as "a more dynamic and responsive approach".
Mandatory guardrails for high-risk AI that Canberra was developing through 2024 were abandoned following Productivity Commission advice to treat new AI-specific regulation as a "last resort".
ACCC Senior Investigator Rosie Evans has argued that without an enforceable AI-specific regime, Australia may struggle to achieve the regulatory cohesion the government aspires to.
Bad timing?
Anthropic arrived in Canberra in the middle of an active legal battle with the U.S. Department of Defense, which designated the company a "supply chain risk" after contract negotiations broke down over whether the Pentagon could use Claude without restrictions on autonomous weapons and mass surveillance.
A federal judge in San Francisco granted Anthropic a preliminary injunction on 26 March, calling the designation "likely both contrary to law and arbitrary and capricious".
Amodei told the Canberra forum that democracies needed to retain the "upper hand" militarily as AI expanded, and that a "sophisticated" tax on AI-generated profits was inevitable to ensure the benefits reached workers displaced by the technology.
The Australian government has not publicly addressed either proposition.