The Australian federal government has released fresh guidance on AI adoption in the workplace, giving businesses a clearer roadmap for responsible implementation at a time when companies globally are facing a wave of litigation over algorithmic hiring tools.
But will it work?
The National AI Centre's Guidance for AI Adoption condenses the previous 10 voluntary guardrails into six essential practices, replacing the 2024 Voluntary AI Safety Standard with a dual-structured approach - foundational advice for companies starting out and detailed implementation practices for mature operators.
“To help businesses put responsible AI into action, the guidance provides practical tools and templates, such as an AI policy template and an AI register template,” says the Department of Industry, Science and Resources.
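What an AI register entry might capture is easiest to see in miniature. The sketch below is illustrative only: the field names are assumptions made for this article, not the National AI Centre's actual template, which businesses should consult directly.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: these fields are assumptions, not the official
# National AI Centre register template.
@dataclass
class AIRegisterEntry:
    system_name: str            # e.g. "Resume screening assistant"
    vendor: str                 # supplier, or "in-house"
    purpose: str                # what the system is used for
    owner: str                  # the accountable person or role
    risk_level: str             # e.g. "low", "medium", "high"
    uses_personal_data: bool    # flags privacy obligations
    human_reviewed: bool        # whether outputs get human sign-off
    last_reviewed: date = field(default_factory=date.today)

register: list[AIRegisterEntry] = [
    AIRegisterEntry(
        system_name="Copilot email drafting",
        vendor="Microsoft",
        purpose="Drafting and summarising internal email",
        owner="Head of Operations",
        risk_level="low",
        uses_personal_data=True,
        human_reviewed=True,
    ),
]
```

Even a simple record like this answers the framework's first practice - deciding who is accountable - before any tool goes live.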
The framework responds to findings from the 2025 Responsible AI Index, which revealed a persistent gap between organisations that endorse ethical AI standards and those actually putting them into practice, with smaller businesses in particular struggling with resource-intensive governance requirements, according to analysis by Allens.
Adoption patchy despite the hype
A Reserve Bank survey released this week found enterprise-wide AI transformation remains the exception.
About two-thirds of the 100 medium and large firms surveyed have adopted AI in some form, but nearly 40% report that usage is still minimal - largely summarising emails or drafting text with off-the-shelf tools like Microsoft Copilot or ChatGPT, as reported by The Conversation.
Fewer than 10% have embedded AI into advanced processes such as fraud detection, a pattern that reflects global trends - McKinsey's 2025 report found nearly all companies are investing in AI, but just 1% consider themselves mature in deployment.
Organisations seeing real returns are redesigning workflows rather than simply bolting AI onto existing processes - Microsoft reported in July that clients such as Gulf-based resources giant Ma'aden are saving up to 2,200 hours monthly through Microsoft 365 Copilot integration, while HELLENiQ ENERGY reported a 70% productivity boost and a 64% reduction in email processing time.
Legal pressure intensifies
The cautious approach makes sense when considering the legal exposure mounting in American courts.
In May, a California federal court granted preliminary certification in Mobley v. Workday, a class action alleging the HR software firm's AI screening tools discriminated against job applicants over 40, with the plaintiff claiming he was rejected from more than 100 positions over seven years, sometimes receiving automated rejections within an hour of applying.
The case potentially affects millions of job seekers who've passed through Workday's platform, after the court found the company wasn't simply implementing employer criteria but actively participating in decisions by recommending which candidates moved forward.
Similar complaints are stacking up - in August, a plaintiff filed suit against Sirius XM Radio claiming its AI hiring tools relied on data points like postcodes and educational institutions as proxies for race, resulting in 149 out of 150 applications being rejected.
The ACLU has also filed charges against Intuit over a HireVue video interviewing platform that allegedly disadvantaged a deaf employee applying for promotion - the system's AI feedback reportedly told her to practise active listening.
Six practices for getting it right - according to the government
The new Australian framework establishes six core practices:
- Decide who is accountable
- Understand impacts and plan accordingly
- Measure and manage risks
- Share essential information
- Test and monitor
- Maintain human control
Impact assessments are positioned as key to translating principles into action, though they remain underused and difficult to operationalise, creating a widening gap between rapid AI uptake and consistent risk oversight.
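To make the "test and monitor" and "maintain human control" practices concrete, here is a minimal sketch of how an automated screening step might log every decision and escalate adverse outcomes to a person rather than auto-rejecting. The threshold, function and label names are hypothetical - the guidance prescribes outcomes, not code.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

# Hypothetical threshold, chosen for illustration only.
ADVANCE_THRESHOLD = 0.85

def screen_candidate(candidate_id: str, model_score: float) -> str:
    """Apply an AI suitability score while keeping a human in the loop."""
    # "Test and monitor": record every automated decision for later audit.
    log.info("candidate=%s score=%.2f", candidate_id, model_score)

    if model_score >= ADVANCE_THRESHOLD:
        return "advance_to_interview"

    # "Maintain human control": adverse outcomes are escalated to a
    # reviewer rather than triggering an automated rejection - the very
    # pattern at issue in cases like Mobley v. Workday.
    return "refer_to_human_reviewer"

print(screen_candidate("A-1042", 0.91))  # advance_to_interview
print(screen_candidate("A-1043", 0.42))  # refer_to_human_reviewer
```

The design choice that matters is the asymmetry: favourable outcomes can be automated, but rejections always reach a human.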
The Australian Signals Directorate has also released advice on AI supply chain risks, highlighting cybersecurity challenges from the complex ecosystem of models, data, libraries and cloud infrastructure that AI depends on.
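One basic control in the supply chain vein is verifying that a model artifact hasn't been tampered with before loading it. Below is a minimal sketch assuming a known-good SHA-256 digest obtained from the vendor - the pinned value and file name are placeholders, not part of the ASD advice.

```python
import hashlib
from pathlib import Path

# Placeholder digest: in practice, take this from the vendor's signed
# release notes or an internal allow-list, never the download page alone.
PINNED_SHA256 = "replace-with-known-good-digest"

def verify_model_artifact(path: Path, expected_sha256: str) -> None:
    """Refuse to load a model file whose SHA-256 doesn't match the pin."""
    sha = hashlib.sha256()
    with path.open("rb") as f:
        # Hash in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    if sha.hexdigest() != expected_sha256:
        raise RuntimeError(
            f"{path} failed integrity check: got {sha.hexdigest()}, "
            f"expected {expected_sha256}"
        )

# Usage (hypothetical file name):
# verify_model_artifact(Path("model.onnx"), PINNED_SHA256)
```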
For Australian businesses, the message is clear - AI governance isn't optional anymore, and with California's new automated decision-making regulations taking effect in October 2025 and Australia's own regulatory environment maturing, companies deploying AI tools without proper oversight are increasingly exposed.
The productivity gains are genuine - but so are the legal risks.