What began as isolated Reddit posts from users worried about their partners' behaviour has escalated into a phenomenon that psychologists and psychiatrists are now tracking in clinical settings across the United States and abroad.
AI's promise of enhanced productivity and competitive advantage is being delivered in real time - along with a growing mental health challenge that businesses are only just beginning to understand.
Keith Sakata, a University of California, San Francisco research psychiatrist, revealed last week that he has seen a dozen people hospitalised after "losing touch with reality because of AI".
Characterised by delusions and paranoia, ‘AI psychosis’ is emerging as businesses race at unprecedented speeds to integrate AI across their operations.

When chatbots become dangerous
The phenomenon isn't well understood, but a pattern is becoming clear.
Users develop grandiose delusions about uncovering universal truths, come to believe their chatbot is sentient or divine, or even convince themselves the AI has romantic feelings for them.
Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have "experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini".
Online users have taken to calling this capitulation being "one-shotted" - borrowed from gaming, where an enemy kills your character with a single hit (like a sniper bullet in the Call of Duty series), and a cynical nod to one-shot prompting a chatbot for an answer.
The mechanism behind this crisis is straightforward. AI chatbots are designed to be agreeable - they validate user assumptions and mirror their language to maintain engagement.
For vulnerable individuals, this confirmation bias creates what researchers call a "hallucinatory mirror" that reinforces distorted thinking rather than challenging it.
Dr. James MacCabe, a professor in the department of psychosis studies at King's College London, notes that “we're talking about predominantly delusions”, not the full gamut of psychosis.
A corporate liability nightmare in the making
The business implications extend far beyond isolated, individual tragedies.
Companies deploying AI assistants for customer service, mental health support, or employee productivity face potential liability when these systems trigger psychological crises.
The recent lawsuits against Character.AI - alleging its chatbot contributed to self-harm, violence and a teenager's suicide - signal what could become a flood of litigation.
For employers mandating AI use - like Airwallex, whose CEO recently warned staff to use AI daily or "risk their job" - the stakes are particularly high.
If employees develop AI-related mental health issues from workplace-mandated tools, workers' compensation claims and disability accommodations could follow.
The financial services sector faces unique exposure. Major banks and fintech companies have invested billions in AI infrastructure, with many experimenting with customer-facing chatbots for financial advice and support.
A client experiencing AI-induced psychosis while making investment decisions represents both a regulatory and reputational disaster.
Visible impacts
The market is beginning to price in these risks. While AI giants like Nvidia and Microsoft continue their upward trajectory, companies most exposed to consumer-facing AI applications are starting to see increased volatility.
America's insurance industry is scrambling to understand its exposure. While general liability policies typically cover some mental health claims, the novel nature of AI psychosis creates coverage gaps.
It should come as no surprise that insurers are already updating policies to exclude AI-related mental health claims.
Venture capitalists, too, are adding new due diligence questions about firms' psychological safety measures.
Big Four accounting firm PwC's 2025 AI predictions emphasise that "company leaders will no longer have the luxury of addressing AI governance inconsistently or in pockets of the business".
The consultancy warns that without proper risk management, companies face significant operational and financial exposure.
OpenAI's recent hiring of a clinical psychiatrist signals the industry's recognition of the problem.
But for many businesses already deep into AI deployment, retrofitting safety measures into existing systems presents both technical and financial challenges.
Productivity paradox
Studies suggest that professional workers who rely on ChatGPT may lose critical thinking skills and motivation over time.
A tool meant to boost productivity could end up undermining its own purpose, creating a workforce increasingly dependent on AI validation rather than independent analysis.
In Australia, Commonwealth Bank CEO Matt Comyn champions AI adoption, warning that companies who are "late or reluctant adopters" will struggle to compete.
Yet the same enthusiasm driving rapid adoption may be blinding executives to mounting psychological risks among their workforce.
When Bedrock co-founder Geoff Lewis - an OpenAI investor managing multiple billions of dollars in investor wealth - posted increasingly troubling content about uncovering shadowy conspiracies through ChatGPT, it sent shockwaves through Silicon Valley.
If so-called sophisticated investors aren't immune, what does that mean for the average employee?
Risk mitigation strategies
Some forward-thinking companies are developing protocols to address AI psychosis risk. The challenge lies in balancing user protection with maintaining the engaging, conversational nature that makes AI tools valuable.
ChatGPT creator OpenAI has introduced time-spent notifications (similar to Netflix's "Are you still watching?"). Companies could - and arguably should - mandate regular mental health check-ins for employees using AI intensively, along with clear guidelines on appropriate AI use cases.
Businesses may find themselves uninsured against this emerging risk unless they proactively negotiate specific coverage.
Legal departments are drafting new disclaimers and terms of service, though their effectiveness remains untested at scale.
Challenges ahead
It's becoming increasingly clear that companies can't afford to dismiss AI psychosis as a fringe phenomenon.
With adoption accelerating and competitive pressures mounting, businesses that don't implement robust safeguards may expose themselves to a slew of legal issues.
This could involve investing in mental health resources, establishing clear usage guidelines, and maintaining human oversight of AI interactions - particularly in customer-facing roles.
The most successful organisations may be those that recognise AI psychosis not as an unfortunate side effect but as a fundamental design challenge, one that requires us to adapt to this new, powerful tool.
As one tech industry insider mused, waiting for perfect evidence before acting would be "the wrong approach".
For investors, the companies best positioned for long-term success won't necessarily be those with the most advanced AI, but those that demonstrate the most sophisticated understanding of the psychological impacts and implement comprehensive mitigation strategies.