Gate News message, April 28 — Meituan has quietly rolled out a new AI model, LongCat-2.0-Preview, on its LongCat API platform with an update log dated April 20, but has not issued any official announcement or technical report. Unlike previous LongCat series models (Flash-Chat, Flash-Thinking, Flash-Lite, Flash-Omni, Next), which shipped with official blog posts, technical reports, and open-source releases on Hugging Face and GitHub, the 2.0-Preview version offers no open-source links and is available exclusively via API.
The model’s update log highlights three core capabilities: agent development, with native support for tool calling, multi-step reasoning, and long-context tasks; proficiency in code generation, workflow automation, and complex instruction execution; and deep integration with Claude Code, OpenClaw, OpenCode, and Kilo Code. According to April 24 reports from multiple media outlets citing people familiar with the matter, the model has over one trillion total parameters, uses a Mixture of Experts (MoE) architecture, and supports a 1 million-token context window, putting it in the same scale class as DeepSeek V4, which was released the same day.
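Meituan has published no API documentation for the model, so the exact request schema is unknown. As a rough illustration of what "native tool calling" typically means in practice, the sketch below builds a request in the OpenAI-compatible chat-completions format that most third-party model APIs follow; the model identifier, the `run_shell` tool, and all field names here are assumptions, not confirmed by Meituan.

```python
import json

def build_tool_call_request(user_message: str) -> dict:
    """Assemble a hypothetical tool-calling request.

    Assumes an OpenAI-compatible chat-completions schema; LongCat's
    actual API shape is unpublished, so every field here is illustrative.
    """
    return {
        "model": "LongCat-2.0-Preview",  # assumed model identifier
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "run_shell",  # hypothetical tool for an agent
                    "description": "Execute a shell command in the workspace.",
                    "parameters": {
                        "type": "object",
                        "properties": {"command": {"type": "string"}},
                        "required": ["command"],
                    },
                },
            }
        ],
    }

request = build_tool_call_request("List the files in the current directory.")
print(json.dumps(request, indent=2))
```

Under this convention, the model would respond with a structured tool call (tool name plus JSON arguments) rather than plain text, which is what allows coding agents such as Claude Code or OpenCode to drive it in multi-step loops.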
Insiders revealed that LongCat-2.0-Preview was trained entirely on domestic computing clusters using between 50,000 and 60,000 Chinese-made accelerator cards, marking the largest-scale training task completed on domestic AI infrastructure to date. During the testing phase, the model provides a free daily allowance of 10 million tokens.