US Concerned over Anthropic's Mythos, AI Programmes to Be Tested Before Release
Several US government officials have recently raised concerns over advanced artificial intelligence (AI) programmes such as Mythos, developed by Anthropic, whose capabilities could pose threats to cybersecurity.
Mythos, for example, can identify security flaws in digital systems that had never been detected before. That capability has made the model a hot topic, since hackers could misuse it to exploit the vulnerabilities it uncovers.
The companies asked to participate include Microsoft, Google, and xAI, owned by Elon Musk. The agreement was announced last week by the Center for AI Standards and Innovation (CAISI), an agency under the US Department of Commerce.
Through this collaboration, the US government will gain early access to new AI models to evaluate national security risks before the technology is widely deployed.
CAISI stated that the testing will assess various potential threats, from cyberattacks to the possible misuse of AI for military purposes.
“Independent and rigorous measurement science is essential to understanding cutting-edge AI and its implications for national security,” said CAISI Director Chris Fall in his official statement.
Microsoft will also jointly develop datasets and workflows for testing AI models. The company previously signed a similar agreement with the AI Security Institute in the UK.
This latest step builds on a policy adopted by the Trump administration in July 2025 to work with technology companies on examining AI models for national security risks.
The US government has maintained similar collaborations with OpenAI and Anthropic since 2024, when CAISI was still known as the US AI Safety Institute under the Biden presidency.
CAISI claims to have completed more than 40 AI model evaluations, including advanced models not yet available to the public.
In some tests, AI developers even submitted model versions with reduced safeguards so the government could more easily research potential national security risks.
Meanwhile, the Pentagon announced last week a collaboration with seven AI companies to bring advanced AI capabilities to the Department of Defense’s classified networks.
However, Anthropic is not on that list because it remains in a dispute with the Pentagon over limits on the use of AI technology for military purposes, as summarised by KompasTekno from Reuters.