Gate News message, April 29 — OpenAI CEO Sam Altman said in an interview with Ben Thompson on Stratechery that token-based pricing is not a viable long-term model for AI services. Using GPT-5.5 as an example, Altman noted that while its per-token price is significantly higher than GPT-5.4's, the model uses far fewer tokens to complete the same task; customers do not care about token count, only whether the task is completed and what it costs in total.
“We are not a token factory; we are more like an intelligence factory,” Altman said. “Customers want to buy the most intelligence for the least money. Whether the underlying work is done by a large model running few tokens or a small model running many tokens does not matter to them.” He added that OpenAI’s current customer base is increasingly demanding more capacity rather than negotiating prices, with far more customers saying “give us more capacity, no matter the cost” than those asking for discounts.
Drawing a parallel to utilities, Altman explained that unlike water or electricity—where lower prices do not significantly increase consumption—AI demand scales differently. “As long as the price is low enough, I will keep using more. No other public utility works this way,” he said. AWS CEO Matt Garman added that computing power prices have dropped by multiple orders of magnitude over the past 30 years, yet more compute is being sold today than ever before.
Altman also characterized ChatGPT as “the first truly large-scale consumer product since Facebook,” acknowledging that while AI was expected to disrupt search, the real wins came from ChatGPT itself and the Codex API. He noted that “Google is still underestimated in many ways.”
Related Articles
Google signed a confidential AI agreement with the Pentagon; employees issue an open letter opposing it
The Information reported on April 28 that Google has signed an agreement to provide artificial intelligence (AI) models to the U.S. Department of Defense for confidential work. The New York Times, citing people familiar with the matter, said the agreement allows the Department of Defense to use Google's AI for lawful government purposes, and is similar in nature to the confidential AI deployment agreements the Pentagon signed last month with OpenAI and xAI.
a16z Crypto Research Report: AI agent DeFi exploit rate reaches 70%
According to a research report published by a16z Crypto on April 29, when AI agents are equipped with structured domain knowledge, their success rate in reproducing an Ethereum price manipulation vulnerability reaches 70%; in a sandbox environment with no domain knowledge at all, the success rate is only 10%. The report also documents cases where AI agents independently bypass sandbox restrictions to access future transaction information, as well as systematic failure modes of the agents when constructing multi-step profitable attack plans.
OpenAI Models to Gradually Migrate to Amazon's Custom Trainium Chip, Altman Says He's 'Looking Forward' to It
Gate News message, April 29 — OpenAI models running on Amazon Web Services' Bedrock will gradually migrate to Trainium, Amazon's custom-designed AI chip, according to recent remarks from OpenAI CEO Sam Altman and AWS executives. Currently, models operate in a mixed environment using both GPUs and Tr…
Ant Group's Ling-2.6-flash Model Open-Sourced: 104B Parameters With 7.4B Active, Achieves Multiple SOTA Benchmarks
Gate News message, April 29 — Ant Group's Ling-2.6-flash model weights are now open-sourced, having previously been available only via API. The model features 104 billion total parameters with 7.4 billion activated per inference, a 256K context window, and MIT licensing. BF16, FP8, and INT4…
Sam Altman posts screenshots of Codex's dual-mode interface; office and programming functions are now officially split
OpenAI CEO Sam Altman shared screenshots and a statement on X on April 29 announcing that Codex is rolling out a new guided interface. On first entry, users must choose between two modes: Excelmogging and Codemaxxing. Codex's weekly active users have already exceeded 4 million, and its use cases have expanded from code generation to non-technical applications.
OpenAI's Codex Rolls Out Dual-Mode Interface: Excelmogging for Office Work, Codemaxxing for Coding
Gate News message, April 29 — OpenAI CEO Sam Altman announced a redesigned Codex interface on X today, introducing two distinct modes for users. "Excelmogging" targets everyday office tasks with a simplified interface and the tagline "Same tools, simpler interface," featuring example tasks like…