Artificial intelligence technology is rapidly expanding, and so are copyright concerns.
Recent weeks have been hectic in the realm of AI and copyright law, with companies like Disney filing suits against AI providers and courts ruling in favour of Meta and Anthropic. With little clear regulation governing the training of AI models on copyrighted material, the legal precedents now being set could determine the future of the industry.
Disney and Microsoft enter the ring
Several new lawsuits have been filed against AI companies in recent weeks, including those from major film studios.
Last month, Disney and Universal sued AI image generator Midjourney, arguing it had trained its AI models on their copyrighted works without permission. According to these studios, Midjourney can produce images of their copyrighted characters, including those from Star Wars, The Simpsons, and Despicable Me.
Disney and Universal also said they had warned Midjourney’s counsel and asked the company to prevent further copyright infringement, but Midjourney continued to release new versions of its model without addressing the issue. The studios are seeking US$150,000 in damages for each infringed work, with the suit listing more than 150, as well as an order halting further copyright infringement.
“Our world-class IP is built on decades of financial investment, creativity and innovation—investments only made possible by the incentives embodied in copyright law that give creators the exclusive right to profit from their works,” said Walt Disney Company chief legal and compliance officer Horacio Gutierrez.
“We are bullish on the promise of AI technology and optimistic about how it can be used responsibly as a tool to further human creativity. But piracy is piracy, and the fact that it’s done by an AI company does not make it any less infringing.”
A group of writers also filed suit against Microsoft in June, alleging the company used around 200,000 pirated books to train its Megatron AI model.
According to these writers, Megatron was “built to generate a wide range of expression that mimics the syntax, voice, and themes of the copyrighted works on which it was trained.” The group is seeking damages of up to US$150,000 for each copyrighted work Microsoft used.

Duelling decisions
Judges in the U.S. have split on whether AI companies’ use of copyrighted works can qualify as fair use, and therefore be legal, as seen in recent cases involving Anthropic and Meta. In both cases, groups of authors filed suits alleging the companies had trained their AI models on the authors’ copyrighted books without consent.
California federal judge William Alsup ruled last month that Anthropic’s AI training qualified as fair use. According to Alsup, the company’s Claude model’s use of these books was transformative, and Claude’s output could not be considered equivalent to the original works.
“The purpose and character of using copyrighted works to train LLMs [AI large language models] to generate new text was quintessentially transformative,” wrote Alsup. “Like any reader aspiring to be a writer, Anthropic’s LLMs are trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”
Meta also largely prevailed in its case last month, as California judge Vince Chhabria determined that the authors had failed to prove their assertion that Meta’s use of their books to train its Llama model diminished their ability to license their works.
However, Chhabria emphasised that his ruling applied only to the 13 authors who had filed suit against Meta and did not indicate that Meta’s actions qualified as fair use.
While Meta had claimed that training Llama on these books qualified as fair use, “this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria wrote. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
In both cases, the copyrighted works had been pirated by Anthropic and Meta for AI training purposes. Alsup noted that Anthropic would still face trial over its piracy, while in Meta’s case, the same plaintiffs have a separate pending complaint alleging that Meta illegally distributed their books.
Getty Images changes course
Meanwhile, Getty Images has dropped copyright infringement allegations from its lawsuit against AI image company Stability AI.
The trial began in London’s High Court on 9 June, with Getty Images arguing Stability AI had violated its copyright by training its Stable Diffusion image generator on Getty Images’ photos. However, this training took place outside the United Kingdom on Amazon-owned computers.
Getty Images said in legal documents provided to Azzet that while the evidence presented at trial “confirms that the acts complained of within this claim did occur, no Stability witness was able to provide clear evidence as to the development and training process from start to finish, only evidence that these acts occurred outside the jurisdiction.”
The suit will now focus on allegations that Stability AI infringes Getty Images’ trademarks. A company spokesperson declined to comment further.
Getty Images is also pursuing a similar lawsuit against Stability AI in the United States, which still includes the copyright infringement allegations.
