Breaking news! In a rare collaboration, the three major American AI giants are teaming up to combat unauthorized model distillation!
Source: AI & Chip News
Rivals OpenAI, Anthropic PBC, and Alphabet's Google have begun cooperating in an attempt to prevent competitors from extracting the outputs of advanced artificial intelligence (AI) models in the United States, in a bid to gain an advantage in the global AI race.
According to a Bloomberg report, people familiar with the matter said the companies are sharing information through the “Frontier Model Forum.”
The “Frontier Model Forum” is an industry non-profit founded in 2023 by the three companies together with Microsoft; it aims to identify so-called “adversarial distillation” activity that violates terms of service.
The report said this rare collaboration underscores how seriously U.S. AI companies take the issue. The companies worry that some users are building imitation versions of their products, which could undercut them on price while also posing national security risks.
An anonymous insider said U.S. officials estimate that unauthorized distillation costs Silicon Valley labs billions of dollars in profit each year.
Distillation first drew widespread attention in January 2025, when the startup DeepSeek unexpectedly released a reasoning model, R1, causing a sensation in the AI community. Shortly afterward, Microsoft and OpenAI launched an investigation into whether DeepSeek had improperly extracted large amounts of data from U.S. companies' models to develop R1.
On February 23 of this year, Anthropic said in a blog post that DeepSeek, KQ Technology, and Moonshot AI (rendered literally in the original as “the Dark Side of the Moon”) used thousands of fake accounts to interact with its Claude model more than 16 million times in total, violating its terms of service.
Anthropic said that through distillation, AI labs can rapidly improve their own models by training them on the outputs of more powerful systems.
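To make the technique concrete: in distillation, a smaller “student” model is trained to match the output probability distribution of a stronger “teacher” model rather than ground-truth labels. The sketch below is a minimal, self-contained illustration of the standard distillation objective (temperature-softened softmax plus KL divergence); the function names are illustrative and not tied to any company's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution; a temperature > 1
    softens the distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the student's softened distribution to the teacher's:
    the quantity a student minimizes when it trains on teacher outputs."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss that gradient descent reduces.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))          # ~0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # positive
```

This is why access to a model's raw outputs at scale is valuable, and why the labs are trying to detect accounts that query their models millions of times.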
Responsible editor: Zhang Qiaosong