OpenAI CEO Sam Altman has accused rival Anthropic of using “fear-based marketing” to promote its Claude Mythos AI model, according to comments made on the Core Memory podcast hosted by tech journalist Ashlee Vance. Altman argued that the fear-based rhetoric is designed to justify keeping advanced AI systems under the control of a “smaller group of people,” though he acknowledged that some safety concerns are legitimate.
Altman stated that while there are valid concerns about AI safety, “it is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.’” He noted that it was “not always easy” to balance AI’s new capabilities with the belief that the technology should be accessible.
Altman acknowledged that “there are going to be legitimate safety concerns” but suggested that fear-based messaging may be weaponized to justify centralized control. He stated: “if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people’, I think fear-based marketing is probably the most effective way to justify that.”
Anthropic's Claude Mythos model was revealed last month and has drawn significant attention from researchers, governments, and the cybersecurity industry. According to testing, the model can autonomously identify software vulnerabilities and execute complex cyber operations. During testing, Mythos identified hundreds of vulnerabilities in Mozilla's Firefox browser and demonstrated the ability to carry out multi-stage cyberattack simulations.
Anthropic has restricted access to the system through Project Glasswing, a limited program granting select companies—including Amazon, Apple, and Microsoft—the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available.
Anthropic has framed Mythos’ capabilities as both a defensive breakthrough—allowing faster detection of critical software flaws—and a potential offensive risk if misused. The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system.
Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On the prediction market Myriad, users put a 49% chance on Claude Mythos being released to the wider public by June 30.
Separately, a group of researchers claimed last week that they were able to reproduce Mythos' findings using publicly available models.
Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all such claims should be taken at face value. He stated: “There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways. I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world.”
Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity. He noted: “I don’t know where that’s coming from… people really want to write the story of pulling back. But very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’”