Experts have expressed concerns that a landmark court ruling against Meta and Google could spell trouble for the future of research into the harms of AI and social media.
The case was brought by a young woman who sued the platforms for creating deliberately addictive social media products, which she said damaged her childhood and mental health.
Jurors ruled in her favour against both Google, as the owner of YouTube, and Meta, as the owner of WhatsApp, Instagram and Facebook, in a landmark outcome that both tech giants have disputed and pledged to fight.
While the ruling is likely to influence similar cases, researchers who study social media safety say they are worried it could create obstacles to continuing their work.
“Companies may now view ongoing research as a liability, but independent, third-party research must continue to be supported,” said Kate Blocker, director of research and program at Children and Screens: Institute of Digital Media and Child Development, in the wake of the ruling.
She is particularly concerned about the ability to research the safety of artificial intelligence, the latest digital frontier to surge in popularity, just as social media did before it.
“AI companies seem to be mostly studying the models themselves – model behavior, model interpretability, and alignment – but there is a significant gap in research regarding the impact of chatbots and digital assistants on child development,” she said.
“AI companies have a chance to not repeat the mistakes of the past – we urgently need to establish systems of transparency and access that share what these companies know about their platforms with the public and support further independent evaluation.”



