US Security Agency Uses Controversial AI Model Banned by Pentagon
KOMPAS.com – The US National Security Agency (NSA) is reportedly using Anthropic’s “Mythos Preview” artificial intelligence (AI) model, even though the US Department of Defense has labelled it “high-risk”. The Pentagon has in fact banned its use in military environments. The practice was reported by the media outlet Axios, citing two sources familiar with the matter.

The news emerged just days after Anthropic CEO Dario Amodei met White House Chief of Staff Susie Wiles and several other officials, reportedly to discuss Mythos.

The controversy began in February, when Donald Trump ordered all government agencies to stop using Anthropic’s services. That decision followed the company’s refusal to relax certain restrictions on military uses of its AI, including mass domestic surveillance and the development of autonomous weapons.

Mythos is part of Anthropic’s Claude AI system, developed as a competitor to ChatGPT and Gemini. The model is presented as a versatile language model with outstanding capabilities in the field of cyber security, and it is precisely those capabilities that are the main source of concern.

According to reports and company statements, Mythos can identify security vulnerabilities across a range of systems, uncover long-standing bugs, and potentially simulate and execute large-scale cyber attacks automatically. If misused, these capabilities could accelerate and expand cyber attacks without direct human involvement, making them harder to detect than conventional methods.

The concern extends beyond the defence sector to the global financial industry. Several finance ministers and bankers have reportedly highlighted the potential risks, and at the International Monetary Fund meeting in Washington DC in mid-April, Mythos became one of the main discussion topics.