Gate News message, April 22 — Google Cloud announced the release of its eighth-generation custom-built TPU (Tensor Processing Unit) chips. The new lineup includes TPU 8t, designed specifically for AI training tasks, and TPU 8i, optimized for AI inference workloads. Both chips will become available later this year. Google also unveiled new tools for building AI agents and announced a $750 million fund to drive enterprise AI adoption.
TPU 8t delivers 2.8 times the performance of Google’s previous-generation Ironwood TPU at the same price point. TPU 8i improves performance by 80% over its predecessor and adopts a static random-access memory (SRAM) based architecture to deliver “cost-effective large-scale throughput and low latency, enabling millions of agents to run simultaneously,” according to CEO Sundar Pichai. Both TPU 8t and TPU 8i more than double Ironwood’s performance-per-watt efficiency, with TPU 8t improving by 124% and TPU 8i by 117%. Google optimized power efficiency across the entire technology stack and integrated dynamic power management that adjusts consumption based on real-time demand.
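To see why the stated percentage gains support the "more than double" claim, here is a minimal arithmetic sketch. The figures come from the article; the helper function is illustrative only and not an official Google benchmarking methodology.

```python
def pct_gain_to_multiplier(pct: float) -> float:
    """A '124% improvement' over a baseline means 2.24x that baseline."""
    return 1.0 + pct / 100.0

# Performance-per-watt versus Ironwood, per the article's figures
tpu_8t_ppw = pct_gain_to_multiplier(124)  # 2.24x
tpu_8i_ppw = pct_gain_to_multiplier(117)  # 2.17x

# Both multipliers exceed 2.0, consistent with "more than double
# the performance-per-watt efficiency" relative to Ironwood.
print(f"TPU 8t: {tpu_8t_ppw:.2f}x, TPU 8i: {tpu_8i_ppw:.2f}x")
```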
Google’s first-party models now process over 160 billion tokens per minute through direct customer API calls, up from 100 billion last quarter. AI now generates 75% of all new code at Google, up from 50% last fall. Gemini Enterprise, Google’s enterprise offering, grew 40% quarter-over-quarter in paid monthly active users. The company expects to invest slightly more than half of its machine learning compute budget in cloud services by 2026 to better serve cloud customers and partners. Google is also expanding its collaboration with Broadcom to develop and supply custom TPU chips for future generations, as major tech firms seek alternatives to expensive and supply-constrained GPUs from NVIDIA and AMD.
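The growth rates implied by these figures can be worked out directly. A small sketch, using only the numbers quoted in the article; the calculation itself is illustrative, not Google's reporting methodology.

```python
def qoq_growth(previous: float, current: float) -> float:
    """Fractional quarter-over-quarter growth."""
    return (current - previous) / previous

# API token throughput, billions of tokens per minute: 100 -> 160
tokens_growth = qoq_growth(100, 160)  # 0.60, i.e. 60% growth in one quarter

# Share of new code generated by AI: 50% last fall -> 75% now
ai_code_share_delta = 75 - 50  # 25 percentage points

print(f"Token throughput grew {tokens_growth:.0%} quarter-over-quarter")
print(f"AI code share rose {ai_code_share_delta} percentage points since last fall")
```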
Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to the Disclaimer.
Related Articles
OpenClaw, Hermes, and SillyTavern Confirmed in GLM Coding Plan Support
Zhipu AI PM Li announces OpenClaw, Hermes, and SillyTavern as supported GLM Coding Plan projects; other tools will be evaluated case-by-case. Do not share credentials or use subscriptions as API access; contact support for error 1313.
GateNews, 1h ago
Google Cloud CEO: Gemini to Power Apple's Personalized Siri Rollout in 2026
Summary: Gemini will power a personalized Apple Siri in 2026, built on a collaboration between Apple's Foundation Models and Gemini; Apple is testing a chat-like Siri in iOS 27/macOS 27, slated for WWDC 2026.
Abstract: Google Cloud's Gemini is set to power a personalized Apple Siri by 2026, blending Gemini with Apple's Foundation Models under a roughly $1 billion collaboration. Apple is testing a redesigned, chat-like Siri in iOS 27/macOS 27, with a Dynamic Island interface and new features, ahead of a WWDC 2026 unveiling on June 8.
GateNews, 1h ago
SpaceX $60B Cursor Deal Fuels SBF's Pardon Push as FTX's $200K Stake Now Worth $3B
Gate News message, April 22 — SpaceX announced a major partnership with AI coding startup Cursor today, with an option to acquire the company for $60 billion. The deal has given fresh ammunition to Sam Bankman-Fried (SBF), who is currently incarcerated and pushing for a presidential pardon, as it de
GateNews, 1h ago
Chegg Stock Crashes 99% as AI Disrupts Edtech Market
Summary: Chegg soared during online-education demand, then AI tools disrupted its model, triggering massive layoffs and a collapse below $2, with broader AI-driven shifts hitting crypto miners and fintech firms.
Abstract: This article examines Chegg's rise as a pandemic-era edtech darling and its ensuing decline amid the rapid adoption of generative AI, which provides quick answers and undercuts Chegg's value proposition. It documents 2025 layoffs and the stock's plunge toward delisting, and frames Chegg's experience within a broader AI disruption reshaping tech and crypto: Bitcoin miners pivot to AI operations, and AI-native strategies redefine competitiveness in fintech and beyond.
CryptoFrontier, 1h ago
OpenAI Releases Open-Source Privacy Filter Model for PII Detection and Redaction
Abstract: OpenAI's Privacy Filter is an open-source, locally executable model that detects and redacts PII in text. It supports large contexts, identifies many PII categories, and is intended for privacy-preserving workflows such as data preparation, indexing, logging, and moderation.
OpenAI's Privacy Filter is a locally run, open-source model (128k-token context) that detects and redacts PII in text, covering contact, financial, and credential data for privacy workflows.
GateNews, 2h ago
OpenAI Plans to Deploy 30GW Computing Power by 2030
OpenAI aims for 30GW of computing by 2030 to meet rising AI demands, with 8GW completed of a 10GW 2025 target. The expansion signals a strategy to scale infrastructure for next-generation AI development and deployment.
GateNews, 2h ago