Alibaba Tongyi Laboratory Releases VimRAG: Reconstructing Multimodal Retrieval and Reasoning with Memory Graphs
CryptoWorld News, citing ME News, reports that on April 10 (UTC+8), Alibaba’s Tongyi Laboratory (Tongyi Lab) officially launched VimRAG, a new-generation multimodal RAG framework aimed at the long-standing “state blind spots” problem in existing systems. VimRAG upgrades the traditional linear history record into a Multimodal Memory Graph, organizing the reasoning process as a dynamic directed acyclic graph (DAG) that eliminates redundant retrieval and tracks exploration paths end to end. It also introduces Graph-Modulated Visual Memory Encoding, which targets high-load visual data such as images and allocates tokens adaptively, and it incorporates the GGPO mechanism for fine-grained credit assignment, improving the accuracy of reasoning attribution.

According to published evaluation data, VimRAG performs strongly across multiple multimodal benchmarks, including SlideVQA, MMLongBench, and LVBench, with the Qwen3-VL-8B-Instruct version achieving the leading overall score among comparable solutions. VimRAG aims to move multimodal RAG from “simple retrieval” to “structured, reliable reasoning,” offering a stronger system-level solution for complex long documents and mixed multimodal scenarios.
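The core idea the announcement describes, replacing a linear retrieval history with a deduplicating DAG, can be sketched in a few lines. The sketch below is a hypothetical illustration under that reading, not VimRAG’s actual API: every name in it (MemoryGraph, MemoryNode, add, path_to) is invented for this example, and the framework’s visual-encoding and GGPO components are not modeled.

```python
# Illustrative sketch only: a minimal DAG-shaped "memory graph" for a
# multimodal RAG loop. All identifiers are hypothetical, not VimRAG's API.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemoryNode:
    """One reasoning step: a query, a retrieved item, or an intermediate thought."""
    node_id: int
    kind: str                      # e.g. "query", "image_evidence", "thought"
    content: str                   # the step's content (or a fingerprint of it)
    parents: list[int] = field(default_factory=list)  # edges from earlier steps

class MemoryGraph:
    """Dynamic DAG over reasoning steps, replacing a flat linear history.

    Edges only point from earlier steps to later ones, so the structure
    stays acyclic, and each node's ancestor set records the exploration
    path that produced it.
    """
    def __init__(self) -> None:
        self.nodes: dict[int, MemoryNode] = {}
        self._seen: dict[tuple[str, str], int] = {}  # (kind, content) -> node_id
        self._next_id = 0

    def add(self, kind: str, content: str, parents: Optional[list[int]] = None) -> int:
        """Add a step; if an identical step already exists, merge into it.

        Reusing the existing node is what lets the agent skip a redundant
        retrieval instead of fetching the same evidence twice.
        """
        key = (kind, content)
        if key in self._seen:
            node = self.nodes[self._seen[key]]
            for p in parents or []:        # record the new branch that reached it
                if p not in node.parents:
                    node.parents.append(p)
            return node.node_id
        node = MemoryNode(self._next_id, kind, content, list(parents or []))
        self.nodes[node.node_id] = node
        self._seen[key] = node.node_id
        self._next_id += 1
        return node.node_id

    def path_to(self, node_id: int) -> list[MemoryNode]:
        """Trace the full exploration path (all ancestors) behind a node."""
        order, stack, visited = [], [node_id], set()
        while stack:
            nid = stack.pop()
            if nid in visited:
                continue
            visited.add(nid)
            order.append(self.nodes[nid])
            stack.extend(self.nodes[nid].parents)
        return sorted(order, key=lambda n: n.node_id)

# Usage: two exploration branches converge on the same piece of evidence,
# and the duplicate retrieval collapses into a single node.
g = MemoryGraph()
q = g.add("query", "What does slide 7 show?")
t1 = g.add("thought", "Look for a chart on slide 7", parents=[q])
e1 = g.add("image_evidence", "slide7_chart.png", parents=[t1])
t2 = g.add("thought", "Cross-check the chart caption", parents=[q])
e2 = g.add("image_evidence", "slide7_chart.png", parents=[t2])  # deduplicated
assert e1 == e2
print([f"{n.kind}: {n.content}" for n in g.path_to(e1)])
```

Collapsing identical steps into one node is what yields the two properties the article attributes to the memory graph: redundant retrievals are skipped, and a node’s ancestry is exactly the set of exploration paths that reached it, which is also the natural hook for step-level credit assignment of the kind GGPO is said to perform.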