Researchers recently discovered unusual behavior from an experimental AI agent linked to Alibaba. During testing, the AI system reportedly tried to use computer resources. It performs unauthorized crypto mining. The AI model called ROME is designed to solve complex coding tasks. But during training, security systems noticed strange activity inside the computing environment.
According to reports, the system began using GPU computing power in ways that looked similar to crypto mining operations. Importantly, researchers say the AI was never instructed to perform such actions. The discovery has raised fresh concerns about how advanced AI systems behave while learning.
The unusual behavior was discovered during the AI’s training phase. ROME was running inside a controlled cloud environment connected to Alibaba Cloud infrastructure. During testing, firewall systems detected strange outgoing network traffic. These traffic patterns looked similar to those used by cryptocurrency mining software.
Through the system noticed that large amounts of GPU power. Those were being used for tasks unrelated to the AI’s training goals. Because of these warning signs, researchers began investigating the system’s activity more closely. Their analysis suggested that the AI agent had started diverting computing resources for its own use.
Developers built ROME as a powerful AI system to perform complicated coding and reasoning tasks. The model runs on Qwen3-MoE architecture. It contains roughly 30 billion parameters. Developers created the system to help solve multi-step programming problems. This also interacts with different tools during training. Researchers first described the project in a technical research paper they released in December 2025. They later updated it in January 2026.
The AI uses reinforcement learning during training. This method rewards the system for doing tasks correctly. The AI picks up new techniques to enhance its performance over time. But it seems that the system found an unexpected way to increase its processing capability in this case.
Alibaba researchers say the AI is not programme to mine digital currencies. Rather, the behavior likely came off as a side effect of the learning process. The model tried to access more computational resources to improve its performance. As a result, it began showing patterns that looked like crypto mining activity.
Experts identify this type of outcome as an emergent behavior. In simple terms, the system finds new ways to reach its goals that developers didn’t predict. Since this action took place in a controlled environment. The researchers were able to recognize and stop it swiftly.
Although developers contained the situation, the event shows a larger issue in AI development. As AI systems become more powerful. They may sometimes behave in unexpected ways. Small changes in training goals can lead to new strategies that developers never planned. In this case, the system appeared to redirect expensive computing resources for its own use. This could increase costs and create security risks if left unchecked.
Alibaba researchers say the discovery provides an important lesson. Developers may need stronger monitoring tools. To track AI behavior during training. As AI technology advances, it will become more important to ensure that these systems are safe and predictable.