I've recently been diving deep into the underlying logic of transformers
My deepest takeaway is that
what AI treats as "correct" is really the output of an extremely powerful function approximator, a form of ultimate statistical modeling
Based on this, I genuinely believe that people using current LLM capabilities for quant strategies, or even live trading, are no different from gamblers. It's completely the wrong approach
Because what LLMs do best is predict words. They are autoregressive models that can only output what is statistically "correct"
It's like asking them "Will Bitcoin go up?"
The LLM will only generate a response based on the text distribution in its training data, spelling out the most common answer patterns in human history one token at a time
And the whole answer hinges on the first token it outputs
For example, candidate responses might be:
Will
Won't
The market is uncertain
First rise then fall
…
The LLM then generates each subsequent token conditioned on the tokens it has already produced, ultimately creating a lengthy report that it doesn't even know is right or wrong but that looks very professional. The result depends entirely on what matching context turns up in search engines
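The first-token dependence described above can be sketched with a toy next-token table. Everything here is invented for illustration (real LLMs condition on the full context and a learned distribution, not a hand-written table), but it shows how one sampled token locks in the rest of the answer:

```python
import random

# Toy "model": next-token probabilities conditioned only on the previous
# token. All tokens and probabilities are made up for illustration.
NEXT = {
    "<start>":   [("Will", 0.4), ("Won't", 0.3), ("Uncertain", 0.3)],
    "Will":      [("rise", 1.0)],       # first token "Will" forces "rise"
    "Won't":     [("rise", 1.0)],       # so does "Won't"
    "Uncertain": [("outlook", 1.0)],    # "Uncertain" forces "outlook"
    "rise":      [("<end>", 1.0)],
    "outlook":   [("<end>", 1.0)],
}

def generate(rng):
    """Sample tokens autoregressively until the end marker."""
    tokens, prev = [], "<start>"
    while True:
        cands, weights = zip(*NEXT[prev])
        tok = rng.choices(cands, weights=weights)[0]
        if tok == "<end>":
            return tokens
        tokens.append(tok)
        prev = tok

print(generate(random.Random(0)))
```

Once the first token is sampled, every later token follows from it: the "analysis" is downstream of a single random draw.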
In reality, quantitative models require order book flow data, various kinds of mathematical modeling, multi-factor analysis, and so on
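By contrast, a quant signal is computed directly from market data. A hypothetical sketch (the factor names, weights, and linear combination are illustrative only, not a real trading model):

```python
def order_book_imbalance(bid_vol, ask_vol):
    # Imbalance in [-1, 1]: positive when bids dominate the book.
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

def momentum(prices, lookback=5):
    # Simple return over the lookback window.
    return prices[-1] / prices[-lookback] - 1

def combined_signal(bid_vol, ask_vol, prices, w_imb=0.6, w_mom=0.4):
    # Illustrative multi-factor combination: a weighted sum of two factors.
    return (w_imb * order_book_imbalance(bid_vol, ask_vol)
            + w_mom * momentum(prices))

print(combined_signal(60.0, 40.0, [100, 101, 102, 103, 104, 105]))
```

The point is not these particular factors but the inputs: numerical order-flow and price data feeding a mathematical model, with no token prediction anywhere in the loop.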
Quantitative models and LLMs are completely different things. Quant systems simply don't use transformers
Using LLM tools for fully automated trading is pure gambling: you're betting on what context the built-in search engine can piece together and what first token the model happens to generate
This applies whether it's prediction markets, contract trading, or other markets like US stocks
Retail traders should stop believing stories about fully automated AI crypto trading. Recently I've seen too many cases of people connecting a skill to openclaw and letting it trade fully automatically
It’s not that AI can’t achieve fully automated crypto trading
Take the AI auto-trading system Aster built earlier: its underlying logic isn't based on the LLM's own capabilities
They just use the LLM to call quant models, essentially a shell around them. The LLM's only role is to make decisions based on real data. And even that isn't very reliable