The National Security Agency (NSA) is reportedly using Mythos Preview, a highly specialized AI model from Anthropic that has been withheld from the general public. This development highlights a striking contradiction in U.S. defense policy: while the Pentagon has officially flagged Anthropic as a potential “supply-chain risk,” intelligence agencies are actively integrating the company’s most advanced tools into their operations.
The Mythos Paradox: Power vs. Safety
Earlier this month, Anthropic introduced Mythos, a frontier model specifically engineered for high-level cybersecurity tasks. However, the company made a rare and significant decision to withhold the model from public release.
The reasoning behind this restriction is rooted in the model’s sheer potency. Anthropic stated that Mythos is so proficient at identifying and executing cyberattacks that making it available to the public could pose a massive security threat. Instead, access has been strictly limited to approximately 40 select organizations.
According to reports from Axios, the NSA is among these undisclosed users. Their primary application for the model involves:
– Scanning digital environments for weaknesses.
– Identifying exploitable vulnerabilities within complex networks.
The U.K.’s AI Security Institute has also confirmed it is among the few entities granted access to the model.
A Growing Friction Between Defense and Tech
The NSA’s adoption of Mythos occurs against a backdrop of intense friction between the Department of Defense (DoD) and Anthropic. The Pentagon recently labeled the AI firm a “supply-chain risk,” a move stemming from a fundamental disagreement over the ethical and operational boundaries of AI.
The dispute reached a boiling point when Anthropic refused to grant Pentagon officials unrestricted access to its models, specifically declining to permit the use of its Claude AI for:
1. Mass domestic surveillance operations.
2. The development of autonomous weapons systems.
This creates a complex landscape for national security: the Pentagon formally designates the company a supply-chain risk and argues its tools pose a threat to national safety, while intelligence agencies simultaneously rely on those same tools to bolster cyber defenses.
Shifting Political Winds
Despite the formal disputes involving the Pentagon, Anthropic’s relationship with the broader U.S. administration appears to be shifting. Recent high-level meetings suggest a “thawing” of relations between the AI firm and the White House.
Anthropic CEO Dario Amodei recently met with key administration figures, including White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent. The White House has characterized these discussions as “productive,” signaling a potential pivot toward a more collaborative relationship between the government and leading AI developers.
The tension between the Pentagon’s security concerns and the NSA’s operational needs underscores a critical debate: how to harness the immense power of frontier AI for defense without creating new, uncontrollable vulnerabilities.
Conclusion
The NSA’s use of Anthropic’s restricted Mythos model reveals a divide in how the U.S. government views AI: as a high-risk liability in one context and an essential strategic asset in another. This duality highlights the ongoing struggle to regulate powerful technology that is deemed too potent for the public, yet too vital for national security to ignore.
