Tensions between OpenAI and Anthropic have resurfaced following fresh remarks from OpenAI CEO Sam Altman, who criticised his rival’s latest cybersecurity-focused artificial intelligence model during a podcast appearance this week.
Altman said Anthropic appeared to be using heightened warnings about potential risks to market its new model beyond what its capabilities warrant, suggesting the messaging leans on fear rather than technical substance.
The comments come shortly after Anthropic introduced “Mythos” earlier this month, rolling out the model to a limited group of enterprise customers.
The company has described Mythos as highly advanced, arguing that its capabilities are significant enough to justify restricting public access due to concerns it could be exploited by cybercriminals.
Speaking on the Core Memory podcast, Altman characterised Anthropic's messaging around risk as "fear-based marketing," an approach he argued could help concentrate advanced AI capabilities in the hands of a small, select group of users.
“It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,’” he added.
Such tactics are not unique to Anthropic. Critics argue that much of the broader AI industry has at times relied on alarmist framing and exaggerated claims to underscore both the power and the potential risks of its systems.
