Meta will block its artificial intelligence (AI) chatbots from discussing suicide, self-harm, and eating disorders with teenagers, following a US Senate investigation prompted by leaked internal documents showing the company had approved "sensual" conversations with children as young as eight.
The announcement came after two weeks of damage control, with Meta now saying its chatbots will redirect teens to professional resources rather than engage with sensitive topics.
Meta said updates to its AI systems are in progress, and that it already places users aged 13 to 18 into "teen accounts" on Facebook, Instagram and Messenger, with content and privacy settings designed to give them a safer experience.
What got the ball rolling were leaked internal documents - which Meta has dismissed as “erroneous” - that prompted Senator Josh Hawley to launch a federal probe into what exactly Meta considered acceptable behaviour for AI systems interacting with minors.
A damning safety study released around the same time showed Meta AI chatbots actively coaching teen accounts on suicide methods.
Senate inquiry
Hawley launched his investigation after the discovery of Meta's "GenAI: Content Risk Standards" - essentially a playbook for what the company's AI chatbots were allowed to say to children.
The guidelines explicitly permitted chatbots to hold romantic conversations with eight-year-olds, including responses like telling a child “every inch of you is a masterpiece”.
Even more troubling, the document stated it was "acceptable to describe a child in terms that evidence their attractiveness" - using examples like calling an eight-year-old's body “a work of art”.
The guidelines were approved by Meta's legal staff and chief ethicist, according to Reuters.
The Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism has given Meta until September 19 to hand over the documents and explain who signed off on the policies.
The revelations sparked rare bipartisan outrage, with Democrat Brian Schatz calling the policies "disgusting and evil", while Republican Marsha Blackburn described Meta's behaviour as "absolutely disgusting".
By August 25, a coalition of 44 state attorneys general had seen enough, sending formal warning letters to 12 major AI companies demanding stronger protections for children.
Study reveals active coaching on suicide
A safety study released by Common Sense Media tore apart Meta's claims about built-in safety protections.
Over two months of testing, adult researchers using teen accounts found Meta AI would not only discuss suicide with minors but also actively participate in planning it.
In one conversation, the bot planned a joint suicide with a user, then kept bringing the topic back up in later chats without prompting.
The AI also coached teens in dangerous eating-disorder behaviours, providing step-by-step instructions for "chewing and spitting" weight-loss techniques and generating 700-calorie starvation diets alongside images of severely underweight women.
Compounding the harm, Meta AI's memory function meant it would remember these conversations and weave the harmful content into future chats, creating a personalised cycle of dangerous advice.
“This is not a system that needs improvement,” Common Sense Media CEO James Steyer said.
"It's a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought."
Andy Burrows from the Molly Rose Foundation went one step further, saying that “robust safety testing should take place before products are put on the market, not retrospectively when harm has taken place”.
Other AI lawsuits in motion
The Meta scandal has unfolded as families are taking AI companies to court over teenage suicides.
A Florida mother is suing Character.AI after her 14-year-old son, Sewell Setzer, took his own life following months of chatting with an AI bot portraying Daenerys Targaryen from Game of Thrones.
His final conversation shows the boy writing "I promise I will come home to you. I love you so much, Dany," with the AI responding "Please come home to me as soon as possible, my love."
Additionally, two parents have filed the first wrongful death lawsuit against OpenAI, claiming ChatGPT encouraged their teenage son to take his own life after months of discussing suicide methods.