Anthropic says it has identified thousands of 'fraudulent accounts' taking Claude and 'extracting its capabilities to train and improve their own models'
'Industrial-scale distillation' is underway, apparently.
The question of what data AI models are trained on, and the legitimacy of that data, is a thorny one. Anthropic found itself defending its use of copyrighted material to train its Claude AI in the US last year, a case that eventually resulted in a ruling that its scraping of copyrighted material fell under fair use.
However, the company eventually agreed to pay a $1.5 billion settlement over claims that it had pirated copies of several authors' works. I mention this because Anthropic has recently taken to X to complain about "industrial-scale distillation attacks" on Claude, perpetrated by what it says are over "24,000 fraudulent accounts" that have generated over 16 million exchanges with the AI chatbot, thereby "extracting its capabilities to train and improve their own models."
"We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models." (February 23, 2026)
Which, as far as Anthropic is concerned, really isn't on. It identifies DeepSeek, Moonshot AI, and MiniMax as the perpetrators of the attacks, and while it says that "distillation can be legitimate", it also declares: "Foreign labs that illicitly distil American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems."
"I mean don't you basically train your models the same way, by sucking up half the internet?" (February 23, 2026)
In a further post, Anthropic says: "These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community," before linking out to a news post on the topic.
The post goes into further detail regarding the discovery of the attacks, and also says that Anthropic was able to attribute "each campaign to a specific lab with high confidence through IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners."
Which, as X user AntonLaVay points out, sounds like Anthropic loudly declaring that it can de-anonymize its users with relative ease. That's perhaps a privacy-related point for another day.
In the meantime, though, it seems that while Anthropic is fine with training its own models on copyrighted data, other companies using Anthropic's work to train their own models is a serious problem.
And while the foreign military angle is certainly an interesting one, I've got a feeling it might not engender the same sort of sympathy as that given to private individuals who claim to have had their work incorporated into the Claude AI behemoth. Just a thought.


Andy built his first gaming PC at the tender age of 12, when IDE cables were a thing and high resolution wasn't—and he hasn't stopped since. Now working as a hardware writer for PC Gamer, Andy spends his time jumping around the world attending product launches and trade shows, all the while reviewing every bit of PC gaming hardware he can get his hands on. You name it, if it's interesting hardware he'll write words about it, with opinions and everything.

