Google releases the seventh-generation Ironwood TPU Developer Training Guide, detailing system-level performance optimization
ME News report, April 2 (UTC+8): Google has officially released a developer training guide for its seventh-generation Ironwood TPU. The guide is meant to help developers exploit the Ironwood TPU's system-level performance when training and deploying frontier AI models. Ironwood is custom AI infrastructure designed for the computational demands of trillion-parameter models; it scales to a complete system of up to 9,216 chips through the inter-chip interconnect (ICI), optical circuit switches (OCS), the data center network (DCN), and high-bandwidth memory (HBM). The guide details several key optimization strategies for this hardware:

- Use the matrix multiplication unit (MXU), which natively supports FP8 training, to raise throughput.
- Adopt Tokamax, the TPU-optimized JAX kernel library, whose SplashAttention and Megablox grouped matrix multiplication kernels handle the irregular tensors that arise in long-context and mixture-of-experts models.
- Offload collective communication operations to the fourth-generation SparseCore to hide communication latency.
- Tune the allocation of the TPU's fast on-chip SRAM (VMEM) to reduce memory stalls.
- Select the sharding strategy (such as FSDP, TP, or EP) best suited to the model's size, architecture, and sequence length.

(Source: InfoQ)
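Two of these strategies can be illustrated with short JAX sketches. First, FP8 on the MXU: the snippet below is a minimal, hypothetical example (not taken from Google's guide) that casts matmul operands to the e4m3 FP8 format while accumulating in bfloat16 via `preferred_element_type`; a production FP8 recipe would additionally apply per-tensor scaling to keep values inside FP8's narrow dynamic range.

```python
import jax.numpy as jnp

def fp8_matmul(x, w):
    # Hypothetical sketch: cast operands to FP8 (e4m3) so the MXU can use
    # its native FP8 throughput, and accumulate in bfloat16 so precision
    # is not limited by the 8-bit input dtype.
    x8 = x.astype(jnp.float8_e4m3fn)
    w8 = w.astype(jnp.float8_e4m3fn)
    return jnp.dot(x8, w8, preferred_element_type=jnp.bfloat16)
```

Second, sharding choice: the sketch below (again hypothetical, assuming a device count divisible by two) builds a 2D mesh with `jax.sharding`, sharding a weight matrix FSDP-style along a `data` axis and tensor-parallel along a `model` axis.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Hypothetical 2D mesh (assumes an even device count): the 'data' axis
# shards parameters FSDP-style, while the 'model' axis splits the hidden
# dimension for tensor parallelism.
devices = mesh_utils.create_device_mesh((jax.device_count() // 2, 2))
mesh = Mesh(devices, axis_names=('data', 'model'))

# Place a weight matrix sharded along both mesh axes; under jax.jit, the
# compiler propagates this sharding through the computation.
w = jax.device_put(
    jnp.zeros((8192, 8192), dtype=jnp.bfloat16),
    NamedSharding(mesh, P('data', 'model')),
)
```

Whether FSDP, TP, or expert parallelism (EP) performs best depends, as the guide notes, on model size, architecture, and sequence length; the mesh layout above is only one of many valid configurations.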