National Security Agency Deploys Anthropic Mythos AI Model Amid Ongoing Supply Chain Dispute with Department of Defense

In a development that highlights the complex and often contradictory relationship between the United States government and the leading edge of the artificial intelligence industry, the National Security Agency (NSA) has reportedly begun utilizing Mythos Preview, a highly restricted cybersecurity model developed by Anthropic. The adoption of this technology by the nation’s premier signals intelligence agency comes at a pivotal moment, as Anthropic remains embroiled in a high-stakes legal and regulatory battle with the NSA’s parent organization, the Department of Defense (DoD).
According to reports first surfaced by Axios, the NSA is leveraging the Mythos model primarily for environmental scanning and the identification of exploitable vulnerabilities within critical digital infrastructure. This deployment marks a significant milestone in the integration of generative AI into national security operations, yet it stands in stark contrast to the Pentagon’s official stance on Anthropic. Just weeks prior to this revelation, the Department of Defense formally designated Anthropic a "supply chain risk," a label that typically precedes the termination of government contracts or restrictions on use. The friction between these two realities—active deployment by the NSA and a "risk" designation from the DoD—underscores the tension between the military’s need for superior technology and the government’s desire for total control over AI infrastructure.
The Mythos Preview: A Dual-Use Powerhouse
Anthropic announced the existence of the Mythos model earlier this month, positioning it as a specialized "frontier model" tailored specifically for the rigorous demands of cybersecurity. Unlike the company’s flagship Claude series, which is available to the general public and designed for broad conversational and analytical tasks, Mythos was built with a more surgical focus. Anthropic has stated that the model possesses advanced capabilities in code analysis, network topology mapping, and vulnerability discovery.
However, the very capabilities that make Mythos a valuable asset for defenders also make it a potent tool for attackers. In an unprecedented move for the company, Anthropic’s leadership decided to withhold Mythos from public release. The firm argued that the model’s proficiency in identifying "zero-day" vulnerabilities and automating the creation of exploit code was too advanced for general distribution, fearing it could be weaponized by rogue states or cybercriminal syndicates.
Consequently, Anthropic restricted access to Mythos to a highly vetted group of approximately 40 organizations worldwide. While the company has publicly named only a dozen of these partners—including several prominent cybersecurity firms and academic institutions—the NSA’s inclusion in this elite circle has now been brought to light. The UK’s AI Security Institute has also confirmed its access to the model, suggesting a coordinated effort among "Five Eyes" intelligence allies to evaluate and utilize the next generation of AI-driven cyber tools.
A Chronology of Conflict and Cooperation
To understand the gravity of the NSA’s use of Mythos, one must trace how Anthropic’s relationship with the U.S. defense establishment deteriorated, and then abruptly shifted, over the first half of 2026.
- March 5, 2026: The Department of Defense officially labels Anthropic a "supply chain risk." The designation was reportedly triggered by Anthropic’s refusal to grant Pentagon officials "unrestricted access" to its model weights and internal training data. The Pentagon argued that without such access, it could not guarantee the safety or loyalty of the AI systems being integrated into defense networks.
- March 9, 2026: Anthropic files a lawsuit against the Department of Defense. In its filing, the company argues that the "supply chain risk" designation was not based on technical failures but was instead a retaliatory measure. Anthropic claimed the Pentagon was attempting to coerce the company into removing safety guardrails that prevent its AI from being used for mass domestic surveillance and the development of autonomous lethal weapons systems.
- April 2, 2026: Anthropic announces Mythos, declaring it a "restricted-access" model due to its high potential for offensive misuse.
- April 18, 2026: Reports emerge of a "thawing" in relations between Anthropic and the executive branch. Anthropic CEO Dario Amodei is confirmed to have met with White House Chief of Staff Susie Wiles and Secretary of the Treasury Scott Bessent. The meeting is described by White House insiders as "productive," signaling a potential shift in how the Trump administration intends to manage the AI sector.
- April 19, 2026: Sources reveal that despite the DoD’s "supply chain risk" label, the NSA is actively using Mythos for high-level cybersecurity operations.
The Pentagon’s Dilemma: Security vs. Sovereignty
The central conflict between Anthropic and the Pentagon revolves around the concept of "AI Sovereignty." For the Department of Defense, a "black box" model—where the internal mechanics and decision-making processes are not fully transparent to the government—represents a liability. Pentagon officials have expressed concerns that proprietary AI could contain hidden backdoors or "sleeper agents" that could be activated by foreign adversaries, particularly given the global nature of tech talent and hardware supply chains.
Anthropic, conversely, operates under a "Responsible Scaling Policy." The company, founded by former OpenAI executives with a focus on AI safety and alignment, maintains that giving the military unrestricted access to its core models would invite misuse. Specifically, Anthropic has resisted requests to modify its "Constitutional AI" frameworks to allow for the processing of data harvested through mass surveillance programs or to assist in the targeting logic of drone swarms.
The NSA’s decision to use Mythos despite these disputes suggests a pragmatic "needs-must" approach within the intelligence community. The NSA’s mission is to protect U.S. networks and gain an information advantage over adversaries like China and Russia, both of which are aggressively developing their own AI cyber-capabilities. If Mythos is the most effective tool for finding vulnerabilities before an adversary does, the NSA appears willing to bypass the broader political and legal friction between its parent agency and the developer.

Implications for the AI Industry and National Security
The deployment of Mythos by the NSA carries several long-term implications for the tech industry and the future of warfare.
1. The Normalization of "Tiered Access" Models
Anthropic’s decision to create a "government-and-vetted-only" model sets a precedent for other AI labs. Companies like OpenAI and Google may follow suit, creating highly capable but dangerous models that never see a public release. This creates a new class of "digital munitions"—software that is treated with the same level of export control and secrecy as stealth technology or nuclear secrets.
2. Legal Precedents for Supply Chain Designations
If Anthropic wins its lawsuit against the DoD, it could limit the government’s ability to use "supply chain risk" labels as a tool for political or operational leverage. However, if the DoD prevails, it may force AI companies to choose between granting the government total transparency and being barred from the lucrative federal market entirely.
3. The Arms Race in Automated Hacking
The use of Mythos for "vulnerability scanning" is a defensive posture, but the line between defense and offense in cyberspace is notoriously thin. A tool that can find a hole in a firewall in order to patch it can just as easily find one to exploit. The NSA’s adoption of such tools signals that the era of human-led cyber warfare is rapidly giving way to machine-speed operations, in which AI systems autonomously probe and defend networks.
Political Realignments and the Trump Administration
The recent meeting between Dario Amodei and high-ranking officials in the Trump administration suggests that the executive branch may be looking for a middle ground. The administration has historically favored deregulation and "America First" technological dominance. By engaging directly with Anthropic, the White House may be attempting to bypass the Pentagon’s bureaucracy to ensure that the U.S. remains the leader in AI development, even if it means accommodating the ethical constraints of the companies building the tech.
Secretary of the Treasury Scott Bessent’s involvement is particularly noteworthy. It suggests that the administration views AI not just as a military tool, but as a critical economic engine. Ensuring that companies like Anthropic remain viable and domestic, rather than pushing them toward international markets or causing them to collapse under regulatory pressure, is a clear economic priority.
Official Responses and Silent Corridors
As of this writing, the response from the involved parties has been characterized by strategic silence. TechCrunch has reached out to the National Security Agency for a formal statement regarding the operational parameters of its Mythos usage, but the agency has yet to provide a comment. Anthropic has similarly declined to comment on its specific contracts or the nature of its discussions with the NSA, citing the sensitive nature of its cybersecurity work and the ongoing litigation with the Department of Defense.
The UK’s AI Security Institute, while confirming its access to Mythos, emphasized that its role is purely evaluative, aimed at understanding the risks posed by frontier models to national infrastructure. This international validation provides a layer of legitimacy to Anthropic’s claims about the model’s power, even as the company remains a "risk" in the eyes of the Pentagon.
Conclusion: A Fragmented Strategy
The current state of affairs reveals a fragmented U.S. strategy toward artificial intelligence. While one arm of the national security apparatus (the DoD) labels a leading AI firm a threat, another arm (the NSA) integrates that firm’s most advanced technology into its core mission. This paradox reflects the broader struggle of democratic institutions to keep pace with the exponential growth of AI.
As the legal battle between Anthropic and the Pentagon continues, and as the Trump administration attempts to forge a new path for tech policy, the deployment of Mythos stands as a testament to the indispensable nature of high-end AI. In the modern geopolitical landscape, the risk of using a "supply chain risk" may be deemed lower than the risk of being left behind in the AI arms race. For the NSA, the ability to scan and secure the digital environment with Mythos is evidently a capability too vital to be sidelined by a mere designation.