Anthropic has exposed how Chinese AI companies are stealing data from Claude to build their own LLMs!
Before getting into it, a reminder: a few days ago I posted that "in terms of API usage, four of the top five AI companies in the world are under Chinese control. Chinese-made AI models are producing outputs comparable to Claude's, and their API costs are about 8-10 times lower. That's why their models are so widely used across the industry." But how did they achieve that capability?
Let's try to find out in detail.
Anthropic's allegation against DeepSeek, Moonshot AI (Kimi), and MiniMax is that they launched an industrial-scale extraction attack on the Claude model. In other words, they attempted to replicate Claude's advanced capabilities and use the results to train their own models. To do this, they opened nearly 24,000 fake accounts and interacted with Claude over 16 million times. Their goal was to capture Claude's advanced reasoning, agentic behavior, coding skills, and tool usage.
DeepSeek ran prompt campaigns through about 150,000 interactions to reverse-engineer Claude's internal logic and step-by-step reasoning. They even used Claude as a grading system to evaluate their own model's outputs! Meanwhile, Moonshot AI made about 3.4 million interactions focused solely on copying agentic reasoning and tool usage. The most aggressive was MiniMax, which conducted roughly 130 million interactions, mainly targeting coding and orchestration. Interestingly, whenever Anthropic detected and blocked them, they shifted to a different Anthropic model within 24 hours and resumed data extraction.
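The pattern described above, querying a frontier "teacher" model at scale, keeping its responses as training data, and even using the teacher to grade a "student" model's answers, is the classic knowledge-distillation recipe. A minimal sketch of the idea (every name here is hypothetical: `teacher_api` is a local stand-in function, not any real API client):

```python
# Sketch of distillation-style data collection (illustrative only).
# teacher_api is a stub standing in for calls to a frontier model's API.

def teacher_api(prompt: str) -> str:
    """Stand-in for a teacher model's API endpoint."""
    return f"step-by-step answer to: {prompt}"

def collect_distillation_data(prompts):
    """Query the teacher and record (prompt, response) pairs.

    In a real distillation pipeline, these pairs would later be used
    to fine-tune a smaller 'student' model.
    """
    return [(p, teacher_api(p)) for p in prompts]

def teacher_as_judge(student_answer: str, reference: str) -> float:
    """Crude stand-in for using the teacher to score student outputs.

    Real pipelines would ask the teacher model to rate the answer;
    here we just check for an exact match.
    """
    return 1.0 if student_answer == reference else 0.0

pairs = collect_distillation_data(["explain recursion", "write a sort"])
print(len(pairs))  # 2
```

At 16 million interactions, the alleged campaigns would amount to running a loop like this across thousands of accounts, which is why Anthropic frames it as industrial-scale rather than ordinary API usage.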
They carried out the entire operation in a highly planned, decentralized manner. To evade detection, they used rotating IPs, shared payment methods, and synchronized activity across thousands of accounts. Anthropic describes it as the largest documented AI model distillation campaign to date.
The geopolitical context behind this is also quite interesting. A few days ago, OpenAI warned the US government with a memo that Chinese labs are indirectly trying to access US frontier models.
But the question is, what will they do with all this stolen data?
The answer is that they will use the data extracted from Claude to make their own models more powerful. Processing that much data requires massive data centers and supercomputers, which consume enormous amounts of electricity. Surprisingly, China added over 500 gigawatts of new power capacity in 2025 alone, nearly 10 times more than the United States!
In other words, they have devised a full plan to extract data from American AI models and run their AI infrastructure at industrial scale. So far, none of the three accused Chinese AI companies has publicly denied the allegations of data theft.