US judge rules that Anthropic's use of copyrighted content to train AI was fair use, but pirating books is a step too far
You lose some, you lose some.
Keep up to date with the most important stories and the best deals, as picked by the PC Gamer team.
The UK's Data (Use and Access) Bill has now passed, without the amendment that would've required AI tools to declare the use of copyrighted material, or any provision for copyright holders to 'opt-out' of their work being used as training data. The whole thing has left me wondering if there'll ever be something that AI can't gobble up and regurgitate. Well, a legal case in the US against AI firm Anthropic has produced an absolutely perfect punchline to this bleak episode.
A federal judge has ruled that Anthropic didn't break the law when it used copyrighted material to train the large language model Claude, as this counts as "fair use" under US copyright law, reports AP News. What's keeping Anthropic submerged in legal hot water, though, is how the company may have acquired that copyrighted material—in this case, thousands of books not bought but 'found' online. Long legal story short, AI can scrape copyrighted content—it just can't pirate it.
For context, this all began last summer, when authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson first brought their lawsuit against Anthropic.
That filing from August last year alleged, "Anthropic downloaded known pirated versions of Plaintiffs’ works." The full complaint goes on to read, "An essential component of Anthropic’s business model—and its flagship 'Claude' family of large language models (or 'LLMs')—is the largescale theft of copyrighted works," and that the company “seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.”
A number of documents disclosed as part of legal proceedings unearthed concerns from Anthropic's own employees about the use of pirated books to train Claude. Though the company pivoted to buying physical books in bulk and painstakingly digitising each page for the AI model to gobble up, the judge ruled that the earlier piracy still needs to be legally addressed. As such, the ruling made by San Francisco federal court Judge William Alsup on Monday means that Claude can keep being trained on the authors' works—but Anthropic must return to court in December to be tried over the whole "largescale theft of copyrighted works" thing.
Judge Alsup wrote in this week's ruling, “Anthropic had no entitlement to use pirated copies for its central library." I'm no legal professional, but on this point I can agree. However, Alsup also described the output of AI models trained on copyrighted material as “quintessentially transformative," and therefore protected as fair use under the law.
He went on to add, "Like any reader aspiring to be a writer, Anthropic’s (AI large language models) trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different."
Again, I'm not any kind of lawyer, and I'm definitely not offering legal advice, but yeah, I'm not buying this argument. I'd argue that a truly transformative, creative synthesis requires at least some understanding of whatever material you're imbibing. Large language models like Claude don't 'understand' texts as we do, instead playing an extremely complex game of word association.
In other words, Claude isn't creating, it's just trying to string together enough words that its training data says go together, in order to fool a human into thinking the AI output they're reading is coherent copy. But what do I know? I'm just a writer—and large language models may now enjoy the legal precedent set by this San Francisco case.
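To make the "word association" point concrete: here's a deliberately crude toy sketch in Python of that idea, a bigram model that "writes" by picking whichever word most often followed the previous one in its training text. This is my own illustrative example, not Claude's actual architecture—real LLMs use vastly more sophisticated neural networks—but the core move is the same: predict the next token from statistical patterns, with no understanding of meaning.

```python
from collections import Counter, defaultdict

# Toy training text (made up for illustration)
training_text = "the cat sat on the mat and the cat slept on the mat"

# Count, for each word, which words follow it and how often
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most common follower of `word` in the training data."""
    return follows[word].most_common(1)[0][0]

# "Generate" a sentence starting from "the"
out = ["the"]
for _ in range(4):
    out.append(next_word(out[-1]))
print(" ".join(out))  # strings of plausible-looking words, no comprehension
```

The output reads as passable English purely because those word pairs co-occurred in the training text—which is roughly the sceptical point being made above, just scaled down by many orders of magnitude.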


Jess has been writing about games for over ten years, spending the last seven working on print publications PLAY and Official PlayStation Magazine. When she’s not writing about all things hardware here, she’s getting cosy with a horror classic, ranting about a cult hit to a captive audience, or tinkering with some tabletop nonsense.