Top U.S. large model company bans “Lobster”; $30 billion in annualized revenue can’t stop runaway compute costs
By Zhang Chi, 21st Century Business Herald
On April 7th, the leading U.S. large model company Anthropic announced that its annual recurring revenue (ARR) has exceeded $30 billion, more than tripling from $9 billion at the end of 2025. OpenAI’s annual revenue, by comparison, stands at $25 billion. On the core business metric of recurring enterprise and institutional spending, Claude has surpassed GPT for the first time, marking a key turning point in the large model race.
Anthropic disclosed that the number of enterprise clients spending more than $1 million annually on the Claude platform has surpassed 1,000, doubling over the past two months.
At the same time, Anthropic announced a new partnership with Google and Broadcom, which will provide about 3.5 gigawatts of computing power starting in 2027. Anthropic’s CFO Krishna Rao stated that this is the largest computing power commitment the company has made so far.
Industry analysts note that Anthropic’s business model increasingly resembles a capital-intensive infrastructure business that burns cash for compute, with out-of-control computing costs as the core problem. Inference expenses currently consume more than half of Anthropic’s total revenue, a dilemma similar to the one facing OpenAI.
Just three days before announcing its revenue figures, Anthropic issued a “ban” aimed at global developers: Claude Pro/Max subscribers will no longer be able to draw on their subscription quotas when calling Claude through third-party AI agent tools such as OpenClaw. Continuing to use OpenClaw will therefore require paying separately.
The news that Anthropic had blocked “Lobster” quickly set off heated discussion in the AI community. Luo Fuli, head of Xiaomi’s MiMo large model, posted a lengthy article on X arguing that Anthropic’s move is entirely reasonable: third-party tools like OpenClaw are extremely wasteful under a subscription framework. For a single query, OpenClaw repeatedly calls tools while carrying a context of up to 100k tokens, making every call a heavily resource-consuming request. Compared with Claude’s official coding framework, “Lobster” consumes dozens of times more tokens. She believes the short-term pain is not a bad thing; rather, it will push engineering forward: third-party tools will have to improve context management, raise cache hit rates, and cut wasted tokens.
In the same article, Luo Fuli used the incident to warn the entire industry that “price wars are traps,” urging large model companies not to compete blindly on price. Selling tokens at rock-bottom prices while leaving the door wide open to third-party frameworks may look attractive to users, but it is in fact a trap: “the very trap Anthropic has just escaped.”
Luo Fuli stressed that the industry has been caught in an unsustainable frenzy of inflated token consumption. The real way out is not cheaper tokens but collaborative evolution: model vendors need more transparent billing and more stable service quality, while third-party tools need more efficient engineering design and more responsible usage.