AI Tool “Claude Mythos” Withheld from Public Access Over Safety Concerns About Its Capabilities

Anthropic, a California-based AI company, has decided to withhold its advanced AI tool, Claude Mythos, from public access due to safety concerns about its capabilities. The powerful AI, revealed in March, has identified thousands of software vulnerabilities, including some that had gone undetected for decades. Only about 40 select organizations, including tech giants such as Apple, Amazon, and Google, will have access to the tool to enhance cybersecurity.
Claude Mythos: An Overview
Claude Mythos is an advanced iteration of the Claude AI language model, first introduced by Anthropic in March 2023. Since its launch, Claude has reached version 4.6 and is known for coding efficiency superior to that of comparable models, such as OpenAI’s ChatGPT.
Investments and Development
Anthropic has received approximately $10 billion in backing from companies including Amazon and Google, while OpenAI has benefited from a $13 billion investment from Microsoft. In February 2023, Anthropic launched a specialized version named Claude Code, before an inadvertent disclosure of Claude Mythos occurred on March 26.
Project Glasswing
On April 7, Anthropic introduced Project Glasswing, an initiative that allows select “launch partners” to use Claude Mythos to protect their IT infrastructure. Major corporations, including Amazon Web Services and Microsoft, will participate, with Anthropic providing $100 million in usage credits to support these efforts.
Unique Restrictions on AI Access
The decision to restrict public access to Claude Mythos marks a noteworthy moment in the tech industry. While some observers suggest the move could be unprecedented, it mirrors OpenAI’s earlier delays in launching ChatGPT over concerns about misuse. Notably, Google’s Sundar Pichai acknowledged delaying the launch of its AI, Gemini, citing similar concerns about reliability.
Ethical Considerations
Anthropic has built a reputation for ethical AI development. It has publicly established guidelines to curb misuse, notably barring the Pentagon from using its AI for mass domestic surveillance or the development of fully autonomous weapons. Such ethical stances set Anthropic apart in a rapidly evolving industry.
Safety and Effectiveness of Claude Mythos
While few outside experts have tested Claude Mythos, Anthropic reports that it has achieved unprecedented programming skill. The AI reportedly surpasses all but the most experienced human coders at identifying and exploiting software flaws. It has discovered previously unknown vulnerabilities, including:
- 27-year-old vulnerabilities in OpenBSD, an operating system known for its strong security.
- A 16-year-old flaw in the FFmpeg codec.
- 181 exploitation methods for Firefox vulnerabilities.
All identified vulnerabilities were reported to the original developers before any public disclosure.
Future Implications
With Claude Mythos restricted, Project Glasswing signals a potential shift in how the tech industry collaborates on cybersecurity. Logan Graham of Anthropic described the initiative as a pivotal moment for the sector. With an anticipated valuation of $380 billion by late 2026, Anthropic is poised for significant growth and influence in the AI and cybersecurity domains.
