Anthropic AI Found Being Used for Iran Attack, Despite Trump's Recent Ban

Source: KOMPAS | Translated from Indonesian | Technology

The US military is reportedly continuing to use artificial intelligence (AI) technology from Anthropic in its latest attack operations against Iran.

The report has drawn scrutiny because it emerged only hours after US President Donald Trump officially issued an order banning the technology.

The US military’s dependence on Anthropic’s Claude AI model came to light amid joint military operations between the US and Israel in Iran. The US Central Command has been identified as one of the organisations using Anthropic’s services in the field.

Despite being deployed in large-scale military operations, Anthropic’s Claude AI is believed not to be used directly as a lethal decision-maker.

The Claude large language model (LLM) is confirmed not to be used to fly drones or select bombing targets autonomously. Instead, the AI handles data analysis behind the scenes. Its crucial tasks include analysing intelligence data, instantly translating intercepted enemy communications, and optimising military logistics supply chains.

Claude has proven highly capable at sorting and processing massive quantities of raw data, an analytical capability that military personnel consider invaluable in the midst of combat.

Claude’s use on the battlefield presents its own irony. Not long ago, US Defence Secretary Pete Hegseth labelled Anthropic as a “national security risk”.

However, the ban order apparently provided a six-month transition period during which existing systems are to be phased out gradually.

The US military is now making full use of this grace period to continue using Claude. The Pentagon has reportedly struggled to find an equivalent replacement AI, according to the Wall Street Journal.

Anthropic has firmly refused to grant the US government access to its Claude AI model for use in autonomous weapons systems and mass surveillance programmes.

Notably, Anthropic’s main competitor, OpenAI, has taken the very step that Anthropic has refused.

OpenAI’s CEO, Sam Altman, recently faced criticism after announcing his company’s agreement with the US Department of Defence to deploy the ChatGPT AI model across classified government networks.
