Anthropic Accuses Chinese AI Labs of Model Copying, Putting Nvidia (NVDA) and U.S. Chip Controls at Risk

Anthropic has accused three major Chinese AI labs of running what it called “industrial-scale distillation attacks” on its models. The claim adds a new layer to the tech clash between the U.S. and China. Anthropic’s accusations also come as a senior Trump administration official claims that DeepSeek, the Chinese developer of R1, may have trained its next AI model on Nvidia’s Blackwell chips in violation of policy restrictions.

Anthropic, the San Francisco-based startup behind the Claude coding tool, said DeepSeek, Moonshot, and MiniMax (HK:0100) used fake accounts to extract data from its system. The company said it found 24,000 fraudulent accounts that generated more than 16 million exchanges with Claude. It claims those exchanges were used to “train and improve their own models.”

Distillation is a common AI technique: a smaller model is trained on the outputs of a more advanced one. This lets a company replicate some of the performance of a top system without spending as much on chips and power. Anthropic argues that in this case, the practice crossed a line.
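The mechanics of distillation can be sketched as follows: the "teacher" model's output probabilities serve as soft training targets for a smaller "student" model. The toy softmax models, temperature value, and training loop below are illustrative assumptions for a minimal demonstration, not any lab's actual pipeline.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "teacher": fixed logits standing in for a frontier model's output
teacher_logits = np.array([3.0, 1.0, 0.2])
T = 2.0  # temperature: softer targets expose more of the teacher's knowledge
teacher_probs = softmax(teacher_logits / T)

# Toy "student": starts with uninformative logits
student_logits = np.zeros(3)

# Train the student to match the teacher's soft distribution
lr = 0.5
for _ in range(200):
    student_probs = softmax(student_logits / T)
    # Gradient of the cross-entropy to the teacher's distribution
    # with respect to the student logits
    grad = (student_probs - teacher_probs) / T
    student_logits -= lr * grad

# After training, the student's distribution closely tracks the teacher's
print(np.round(softmax(student_logits / T), 3))
print(np.round(teacher_probs, 3))
```

At scale, the same idea applies with billions of query-response pairs rather than a single probability vector, which is why large volumes of API exchanges with a frontier model are valuable training data.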

“Distillation attacks undermine those controls by allowing foreign labs… to close the competitive advantage that export controls are designed to preserve,” the company said.

Why This Matters for Chip Controls

The timing is key. The U.S. has placed limits on exports of advanced chips to China, including high-end products from Nvidia Corporation (NVDA). These curbs are meant to slow China’s progress in advanced AI. As a result, Chinese labs have had to find other ways to keep up. Some have trained models overseas. Others have relied on older chips. And now, distillation is in the spotlight.

If Chinese firms can use outputs from U.S. models to train their own systems, that may reduce the impact of chip limits. In simple terms, even if China cannot buy the best chips, it may still learn from models built with those chips.

That risk has been raised before. Last year, OpenAI said it found signs of distillation tied to its ChatGPT model. More recently, it told a U.S. House panel that DeepSeek’s upcoming model should be seen in light of efforts to “freeride on the capabilities developed by OpenAI and other U.S. frontier labs.”

Safety and Market Impact

Anthropic also raised a safety concern. It said models “built through illicit distillation” are “unlikely to retain” safeguards that prevent misuse, such as for biothreats or cybercrime. In other words, copied models may not carry over the same guardrails.

Nonetheless, the story has several angles. First, it highlights that Nvidia’s advanced chips remain at the center of the AI race. Second, it suggests that AI firms may tighten access to their systems and monitor use more closely. Third, it raises the chance of new U.S. action if officials decide that distillation weakens export policy.

