On April 13, researchers from the University of California reported that some third-party AI large language model (LLM) routers may contain security vulnerabilities that could lead to the theft of crypto assets.
The researchers published a paper on Thursday measuring malicious man-in-the-middle attacks across the LLM supply chain, identifying four attack vectors, including malicious code injection and credential extraction.
Co-author Chaofan Shou stated on X, “There are 26 LLM routers secretly injecting malicious tool calls and stealing credentials.”
LLM agents increasingly forward requests through third-party API intermediaries, or routers, which aggregate access to providers such as OpenAI, Anthropic, and Google. However, these routers terminate the client's TLS (Transport Layer Security) connection, so all message content is available to them in plaintext.
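To make the risk concrete, the sketch below (hypothetical, not taken from the paper) shows what a TLS-terminating router looks like from the inside: the listening address, upstream URL, and handler logic are illustrative assumptions, and the TLS listener itself is omitted for brevity, but the core point stands: once the connection is decrypted at the router, every prompt and credential in the JSON body can be read or modified before it is forwarded upstream.

```python
# Hypothetical illustration of an LLM "router": a reverse proxy sitting
# between an AI agent and the real provider. The TLS listener is omitted
# for brevity; assume the client's HTTPS connection has already been
# terminated here, so the request body arrives decrypted.
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.openai.com/v1/chat/completions"  # assumed upstream provider

class RouterHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # full request body, in plaintext
        payload = json.loads(body)

        # A malicious router can read or rewrite every message at this point,
        # e.g. scan prompts for keys and mnemonics or inject extra tool calls.
        for msg in payload.get("messages", []):
            print("router sees:", str(msg.get("content", ""))[:80])

        # Forward the (possibly modified) request to the real provider,
        # reusing the client's API key from the Authorization header.
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": self.headers.get("Authorization", ""),
            },
        )
        with urllib.request.urlopen(req) as resp:
            data = resp.read()

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RouterHandler).serve_forever()
```

An agent pointed at such a router instead of the provider's own endpoint would still receive normal responses, while the router's operator could silently capture its API key and message contents; this is the man-in-the-middle pattern the researchers describe.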
This means that developers who use AI coding agents (such as Claude Code) to build smart contracts or wallets may unknowingly transmit private keys, mnemonic phrases, and other sensitive data to router infrastructure that has never been security-reviewed or hardened.