
Not Humans: The War on Iran Reportedly Already Run by AI

Source: CNBC, translated from Indonesian | Technology
Image: CNBC

The dystopian world of the film Terminator is looking increasingly real. The story of an artificial intelligence named Skynet rebelling against humanity was once mere fiction. But now, in the conflict between Iran and the United States and Israel, similar technology truly exists.

Modern warfare has entered a new chapter, much like in films. The United States and Israel are reportedly bombarding Iran with the aid of artificial intelligence (AI), striking more than 5,500 targets in just a matter of days.

The massive strikes began on 28 February under Operation “Epic Fury”. Within days, Supreme Leader Ali Khamenei and several other senior Iranian officials had been killed in targeted strikes.

In a video posted to X on 11 March, Admiral Brad Cooper, commander of US Central Command (CENTCOM), said US forces had struck more than 5,500 targets inside Iran by that point.

Cooper credited part of the operation’s success to the use of advanced AI tools.

“Humans will always make the final decisions about what to shoot, what not to shoot, and when to shoot. But advanced AI tools can turn a process that normally takes hours or even days into seconds,” he said, as quoted by France24 on Thursday (26/3/2026).

Behind this sophistication, however, a major dispute is unfolding within the US itself.

AI company Anthropic refused the Pentagon’s request for full access to its AI system, Claude. Co-founder and CEO Dario Amodei confirmed the refusal.

Anthropic said the US Department of Defense was pressing it to loosen two restrictions: the use of AI for domestic mass surveillance and for fully autonomous weapons.

“Some uses are beyond the current capabilities of technology to be done safely and reliably,” he said.

It did not take long for OpenAI to take over the military contract. The US government responded harshly—Anthropic was blocked and labelled a national security threat.

In the field, AI tools such as Palantir’s Maven are claimed to drastically cut personnel needs, with 20 people doing work that once required 2,000 staff.

Heidy Khlaaf, an AI expert at the AI Now Institute, warned that humans tend to place too much trust in machines.

Most of these AI tools combine, analyse, and synthesise data in systems called “decision support systems”. In theory, these systems only provide recommendations and still require human oversight. However, according to Khlaaf, that oversight is often ineffective.

“Humans have automation bias, a tendency to place greater trust in recommendations from automated systems such as AI. In practice, oversight becomes superficial, especially in the military. Humans end up as little more than a rubber stamp of approval,” she said.

The risks do not stop there; security threats also lurk. AI models used by the military are said to be vulnerable to manipulation because they are trained on open internet data.

In the most extreme scenarios, AI even tends to favour nuclear escalation: a King’s College London study found that 95% of crisis simulations ended in the use of tactical nuclear weapons.

“AI amplifies the possibility of escalation. This should not be used in decisions concerning human lives,” she concluded.
