
Mozilla announced on Tuesday that an early version of Anthropic’s Claude Mythos AI model identified 271 security vulnerabilities in the Firefox browser during internal testing, all of which were patched within the week. Mozilla said it was surprised by the findings, but noted that they suggest a fundamental shift may be underway in the cybersecurity landscape: defenders may be poised to shrink the advantage attackers have held for years.
Mozilla had previously tested an earlier Anthropic model against an older version of Firefox, where it identified 22 security-sensitive vulnerabilities. The 271 vulnerabilities discovered this time represent a major jump in scale.
Mozilla emphasized that all of the vulnerabilities the system found are ones that “top human researchers” could also find, and that AI tools have not yet revealed entirely new categories of vulnerabilities beyond human understanding. Their core advantage is speed: the model greatly accelerates the process, enabling developers to identify issues before attackers can exploit them.
Claude Mythos was released in March 2026 and is Anthropic’s most advanced model to date; internal company materials describe it as a new model that goes beyond the earlier Opus series. In pre-release testing, it found thousands of previously unknown vulnerabilities across major operating systems and web browsers.
Anthropic provides limited access to Claude Mythos through its “Glasswing Program” (Project Glasswing). Access is currently limited to specific vetted technology companies such as Amazon, Apple, and Microsoft, with use restricted to software vulnerability scanning.
The rationale behind this strict control: testing by a UK AI safety research institute found that Claude Mythos can autonomously carry out complex cyber operations, including multi-stage simulated attacks on enterprise networks, without any human intervention. According to people familiar with the matter, the U.S. National Security Agency (NSA) has deployed and is running a preview version of Claude Mythos on classified networks, even though the Trump administration had called for a halt to the use of Anthropic’s technology.
Mozilla’s results cut both ways. Security researchers warn that AI systems capable of analyzing code at scale can automatically identify exploitable vulnerabilities in widely used software. In the hands of bad actors, such systems could pose an unprecedented security threat to software companies and users, and may even give rise to a new generation of automated cyberattacks.
Mozilla said the emergence of AI tools may give defenders, for the first time, an opportunity to shrink attackers’ long-held advantage and achieve a “decisive victory.” Researchers warn, however, that the same capabilities can serve attackers, accelerating the scale and efficiency of automated cyberattacks, which makes controlling access to AI security tools crucial.