Anthropic dodged a major copyright strike for using books to train its Claude AI, but the fight isn’t over.
A US judge ruled that training AI models on authors' books qualifies as fair use rather than copyright infringement. The judge called Anthropic's use "exceedingly transformative," saying its AI learns to create new, different content rather than simply copying.
The ruling came in a lawsuit brought by three authors, including mystery writer Andrea Bartz, who accused Anthropic of stealing their work to build its multibillion-dollar AI business.
Judge William Alsup wrote:
"Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works, not to race ahead and replicate or supplant them — but to turn a hard corner and create something different."
"If this training process reasonably required making copies within the LLM or otherwise, those copies were engaged in a transformative use."
But the judge rejected Anthropic’s attempt to toss the case entirely. Anthropic still faces trial over keeping a “central library” of more than seven million pirated books.
Anthropic could owe up to $150,000 per copyrighted work if the court finds the pirated library infringing.
The company, backed by Amazon and Alphabet, said it was pleased the judge recognized its training as transformative use, but it disagrees with the decision to proceed to trial over the pirated books and is weighing its options.
This is one of the first major legal battles over whether AI models can train on existing copyrighted material. Other disputes include Disney and Universal suing AI image generator Midjourney, and the BBC weighing action over use of its content.
Judge Alsup added:
"If the authors had claimed the training led to infringing knockoffs replicating their works, this would be a different case."
The authors' lawyer declined to comment. The case underscores the unsettled legal landscape as AI companies reach ever deeper into copyrighted media.