Anthropic Complains about IP Theft and Then Gives Its IP Away Via a Security Lapse
April 7, 2026
Let’s go back a few weeks. Earlier this year, I recall reading this Business Insider story: “Anthropic Says Deepseek And Other Chinese AI Companies Fraudulently Used Claude.” The news?
“Anthropic said the distillation efforts were ‘industrial-scale campaigns’ that included roughly 24,000 fraudulent Claude accounts that generated over 16 million exchanges ‘in violation of our terms of service and regional access restrictions.’ … Distillation is the process of training a less powerful model on the output of a more powerful model. The practice is a legitimate way that many US companies use to train their models for public release. Increasingly, major US companies are also stating that their Chinese competitors are improperly using the practice to steal their work.”
The allegation is that Anthropic released updates to its models, and the Chinese companies copied them within hours. Anthropic also argued that unauthorized distillation poses security risks, such as enabling the development of bioweapons. Yet some people believe that Anthropic itself used other people’s information without permission to train its models. There was a lawsuit, and Anthropic paid out $1.5 billion but did not admit any wrongdoing.
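For readers unfamiliar with the mechanics: distillation, as the quoted passage describes it, is usually implemented by training the smaller “student” model to imitate the softened output distribution of the larger “teacher” model. Here is a minimal, illustrative sketch of the standard distillation loss (KL divergence at a temperature); this is a textbook formulation, not Anthropic’s or Deepseek’s actual code, and all names and values are hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.

    A higher temperature flattens the distribution, exposing more of
    the teacher's 'dark knowledge' about near-miss answers.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The student is trained to minimize this quantity, i.e., to
    reproduce the teacher's behavior without access to its weights.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that already matches the teacher incurs zero loss;
# a diverging student incurs a positive loss it can minimize.
matched = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diverged = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

The point of the sketch is that distillation needs only the teacher’s outputs, which is why harvesting millions of exchanges through fraudulent accounts is sufficient: no source code or weights are required.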
Is this a version of the pot calling the kettle discolored? Or maybe what’s good for the goose is definitely not good for certain ganders?
Anthropic stated that the Chinese AI companies Deepseek, Moonshot AI, and MiniMax used Claude to augment their own models via distillation.
Now let’s think about what happened on or around March 30, 2026. Here’s a typical headline about Anthropic’s misfire: “Anthropic Leaks Part of Claude Code’s Internal Source Code.” That incident obviated the need to steal Anthropic’s intellectual property. The company could not get its act together and watched a couple of its digital circus animals wander off to be captured and processed by anyone with an Internet connection and a link to the code. Wasn’t Anthropic labeled a supply chain risk by the US government? Did Anthropic’s management lapse validate that US government statement?
The CNBC write up notes:
A source code leak is a blow to the startup, as it could help give software developers, and Anthropic’s competitors, insight into how it built its viral coding tool. A post on X with a link to Anthropic’s code has amassed more than 21 million views since it was shared at 4:23 a.m. ET on Tuesday [March 31, 2026]. The leak also marks Anthropic’s second major data blunder in under a week. Descriptions of Anthropic’s upcoming AI model and other documents were recently discovered in a publicly accessible data cache, according to a report from Fortune on Thursday [March 26, 2026].
I know that the Big AI Tech or BAIT outfits employ many highly intelligent people. But there is a nagging thought in the back of my mind that some people at these firms say and do some less than brilliant things.
Whitney Grace, April 7, 2026