Anthropic Accuses Chinese AI Laboratories of Stealing Data from Claude
Investing.com - Anthropic on Monday accused three Chinese AI laboratories of large-scale data extraction from its Claude AI models, saying the companies used fraudulent accounts to improperly train their own models.
The AI company stated that DeepSeek, Moonshot, and MiniMax interacted with Claude over 16 million times through approximately 24,000 fraudulent accounts, violating terms of service and regional access restrictions. These allegations came after OpenAI made similar claims, telling House lawmakers that DeepSeek used distillation techniques to enhance its models.
The Chinese labs employed a method called distillation, which uses the outputs of a more capable model to train a weaker one. Distillation is a legitimate technique that AI companies use to create smaller versions of their own models, but Anthropic said that when competitors use it to siphon capabilities from another lab's models without authorization, it is unlawful.
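The core mechanic of distillation described above is that the weaker "student" model is trained to match the stronger "teacher" model's softened output distribution. The snippet below is a minimal, self-contained sketch of that loss computation; the toy logits and temperature value are illustrative and not drawn from any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probability distribution over logits; higher T flattens it."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this nudges the student to reproduce the teacher's output
    distribution. At scale, the "teacher outputs" can simply be API
    responses collected from a stronger model.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the student's flat logits diverge from the teacher's peaked ones.
loss = distillation_loss([4.0, 1.0, 0.5], [2.0, 2.0, 2.0])
```

In a real training loop this loss would be backpropagated through the student's parameters; the teacher is only ever queried, never updated, which is why output access alone is enough to attempt it.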
According to Anthropic, these activities targeted Claude’s most advanced features, including reasoning, tool use, and coding capabilities. The company said it identified these operations through IP address associations, request metadata, and infrastructure indicators.
Reports indicate that DeepSeek generated over 150,000 interactions focused on reasoning abilities and graded tasks. The operation used synchronized traffic across multiple accounts with similar usage patterns and shared payment methods. Anthropic stated that DeepSeek's prompts asked Claude to explicitly explain the reasoning behind its completed responses, yielding chain-of-thought training data. The company also observed tasks in which Claude was used to generate alternative responses to politically sensitive queries about dissidents, party leaders, and authoritarianism.
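The chain-of-thought collection described above amounts to pairing each task with the stronger model's answer and its stated reasoning, producing supervised fine-tuning examples for a weaker model. A hypothetical sketch, with field names that are purely illustrative:

```python
def make_cot_record(prompt, answer, explanation):
    """Bundle a task, the stronger model's answer, and its stated reasoning
    into one supervised fine-tuning record (hypothetical schema)."""
    return {
        "instruction": prompt,
        "reasoning": explanation,  # the "explain your reasoning" output
        "answer": answer,
    }

# Illustrative record; repeated over many prompts, this forms a training set.
record = make_cot_record(
    "Solve: 17 * 23",
    "391",
    "17 * 23 = 17 * 20 + 17 * 3 = 340 + 51 = 391",
)
```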
Moonshot AI conducted over 3.4 million interactions related to reasoning, tool use, coding, and computer vision capabilities. This operation used hundreds of fraudulent accounts across multiple access points. Anthropic attributed this activity to the company by matching request metadata with publicly available profiles of Moonshot’s senior staff.
MiniMax generated over 13 million interactions focused on coding and tool use. Anthropic said it detected the activity while MiniMax was actively training models for release. When newer models became available during the operation, the lab redirected nearly half of its traffic to them within 24 hours to capture the capabilities of the latest systems.
Anthropic stated that the labs accessed Claude through commercial proxy services that resell AI model access. These services operate networks of fraudulent accounts, distributing traffic across Anthropic's API and third-party cloud platforms. In one case, a single proxy network managed over 20,000 fraudulent accounts simultaneously.
Anthropic does not offer commercial access to Claude in China, citing national security reasons. The company said that models distilled without authorization lack the safeguards built into the originals, posing safety risks: protections developed by Anthropic and other US companies are designed to prevent actors from using AI to develop biological weapons or conduct malicious cyber operations.
Anthropic said it has developed detection systems and behavioral fingerprinting tools to identify distillation attack patterns in API traffic. The company is sharing technical indicators with other AI labs, cloud service providers, and authorities. It has also strengthened verification for educational accounts and security research projects.
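Behavioral fingerprinting of the kind described, such as spotting synchronized traffic from many accounts that share a payment method, can be illustrated with a toy heuristic. This is a hypothetical sketch, not Anthropic's actual detection pipeline; all field names and thresholds are assumptions.

```python
from collections import defaultdict

def flag_coordinated_accounts(requests, min_accounts=5, window_s=10):
    """Flag payment fingerprints shared by many accounts whose requests
    arrive in a tight time window (a crude signal of synchronized traffic).

    `requests` is a list of dicts with hypothetical fields:
    "payment_fp" (shared payment method id), "account_id", "ts" (seconds).
    """
    by_payment = defaultdict(list)
    for r in requests:
        by_payment[r["payment_fp"]].append(r)

    flagged = []
    for fp, group in by_payment.items():
        accounts = {r["account_id"] for r in group}
        if len(accounts) < min_accounts:
            continue  # too few accounts to suggest coordination
        times = sorted(r["ts"] for r in group)
        # Synchronized if the whole burst fits inside a short window.
        if times[-1] - times[0] <= window_s:
            flagged.append(fp)
    return flagged

# Six accounts firing within 6 seconds on one payment method get flagged.
burst = [{"payment_fp": "pm_1", "account_id": f"a{i}", "ts": 1000 + i}
         for i in range(6)]
suspicious = flag_coordinated_accounts(burst)  # flags "pm_1"
```

A production system would combine many such signals (IP associations, request metadata, infrastructure indicators, as the article notes) rather than relying on any single heuristic.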
This article was translated with the assistance of artificial intelligence. For more information, please see our Terms of Use.