Militarisation of AI
In the past, humans created weapons because they feared other humans. Then humans created nuclear bombs because they feared other humans would build them first. Now humans are creating artificial intelligence, or AI, and a new question arises that makes the head ache a little: what if the machines we build to help write emails suddenly start deciding where missiles should be aimed?
AI used to make shopping lists and write bedtime stories for a fussy child; now it is beginning to sit at the war table. From cold server rooms to military command screens, algorithms now chew on intelligence data, run simulations, and even assist in targeting.
In recent months, reports say AI models like Claude — originally created by Anthropic as a polite and intelligent digital assistant — have been used in United States military operations.
One report stated that the AI was used to analyse intelligence in attacks on Iran, including helping to identify targets and run operational simulations.
But behind those reports, there is a more concrete technical picture of how AI actually works on the modern battlefield.
The United States Central Command (CENTCOM) is known to actively use machine learning algorithms to help determine the locations of hostile targets in the Middle East.
CENTCOM’s Chief Technology Officer, Schuyler Moore, has confirmed that AI systems play an important role in identifying threat locations in the region.
The technology used is not just “general AI” that chats like a chatbot. This system uses computer vision techniques capable of analysing satellite imagery and reconnaissance drone footage in real time.
At speeds unattainable by humans, the algorithms can detect small changes on the ground: a hidden missile launcher, military vehicles that have recently moved, or newly constructed military facilities.
Imagine a human intelligence analyst having to scan thousands of satellite photos every day. The machine does that in seconds, without coffee, without complaint, without needing weekends off.
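For readers curious what that kind of automated scanning looks like in code, here is a deliberately simplified sketch. It is not CENTCOM's system, and real pipelines rely on trained detection models rather than raw pixel differencing; the function names, thresholds, and numbers below are assumptions made purely for illustration.

```python
# Hypothetical sketch of change detection between two satellite images of
# the same area. Real systems use trained computer-vision models; this toy
# version simply flags tiles where the average pixel difference is large.
import numpy as np

def detect_changes(before: np.ndarray, after: np.ndarray,
                   threshold: float = 30.0, tile: int = 64):
    """Return (row, col) corners of tiles whose mean pixel change exceeds the threshold."""
    diff = np.abs(after.astype(float) - before.astype(float))
    flagged = []
    for y in range(0, diff.shape[0], tile):
        for x in range(0, diff.shape[1], tile):
            if diff[y:y + tile, x:x + tile].mean() > threshold:
                flagged.append((y, x))  # candidate change: new vehicle, structure, etc.
    return flagged

# Example: two 512x512 greyscale frames of the same location, hours apart.
before = np.random.randint(0, 255, (512, 512))
after = before.copy()
after[128:192, 256:320] += 80  # simulate a newly appeared object
print(detect_changes(before, after))
```

The point is not the arithmetic but the scale: a loop like this runs over thousands of image tiles in well under a second, which is exactly the gap between machine and analyst that the military finds attractive.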
This is where a somewhat cold term in the military world appears: the “kill chain.” Put simply, the kill chain is the sequence of steps from detecting a target to attacking it.
AI is now used to shorten that chain. By automating the processing of vast amounts of intelligence data, the time needed to find and track targets can be drastically reduced.
This means military decisions can be made faster, sometimes even before the opponent has a chance to reposition.
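To see where the time actually goes, here is a toy model of the kill chain as a list of stages, using the commonly cited find-fix-track-target-engage-assess breakdown. The numbers, and the choice of which stages are treated as automated, are assumptions for illustration, not reporting.

```python
# A highly simplified model of the "kill chain" as a sequence of stages.
# Which stages are automated, and how long each takes, are illustrative
# assumptions only; the structure shows where automation compresses time.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    automated: bool   # handled by algorithms?
    minutes: float    # rough, invented time cost

KILL_CHAIN = [
    Stage("find",   automated=True,  minutes=0.5),   # scan imagery feeds
    Stage("fix",    automated=True,  minutes=0.2),   # pin down coordinates
    Stage("track",  automated=True,  minutes=1.0),   # follow target movement
    Stage("target", automated=False, minutes=15.0),  # human reviews recommendation
    Stage("engage", automated=False, minutes=5.0),   # human-authorised action
    Stage("assess", automated=True,  minutes=2.0),   # post-strike imagery analysis
]

total = sum(s.minutes for s in KILL_CHAIN)
human_time = sum(s.minutes for s in KILL_CHAIN if not s.automated)
print(f"total: {total} min, of which human review: {human_time} min")
```

Even in this crude sketch, the pattern is visible: the automated stages shrink towards zero, and whatever time remains in the chain belongs almost entirely to the human steps.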
This system has even been used in an American military operation known as Operation Epic Fury.
Nevertheless, officially AI is not allowed to autonomously fire weapons or attack targets. Every target recommendation produced by the AI system must still be verified by a human operator before military action is taken.
Theoretically, this is called the “human in the loop” principle: humans remain within the decision chain. But as is common in the history of military technology, theory and practice often move at different speeds.
Because the analysis systems work faster, there is also greater pressure on humans to keep up with the pace of the machine. When algorithms can process thousands of intelligence signals in seconds, human decisions tend to shift from “considering” to “confirming.”
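A rough back-of-the-envelope calculation shows why. The rates below are invented for the sake of argument, but the arithmetic is the point: when recommendations arrive faster than a person can genuinely weigh them, the backlog grows and the seconds available per decision shrink.

```python
# Back-of-the-envelope illustration (assumed numbers) of the pressure on
# human reviewers when machine analysis outpaces human judgement.
machine_rate = 30   # recommendations produced per hour (assumption)
human_rate = 12     # recommendations an operator can seriously vet per hour (assumption)
hours = 8           # one shift

backlog = max(0, (machine_rate - human_rate) * hours)
seconds_to_keep_pace = 3600 / machine_rate
print(f"end-of-shift backlog: {backlog} items")
print(f"time per item needed to keep pace: {seconds_to_keep_pace:.0f} seconds")
```

With those invented numbers, an operator would need to clear one recommendation every two minutes just to keep up, or watch a backlog of more than a hundred items pile up by the end of the shift.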
That is where the boundary between human decisions and machine decisions begins to blur.
If all this sounds like a Hollywood film plot, it’s because the reality is almost too dramatic to believe. Yet history often moves like black comedy: we laugh first, then realise that what we are laughing at is actually a tragedy.
The problem is not merely AI being used in war. The problem is the speed of change. For years, the debate about military AI has taken place in academic seminar rooms.
Professors discuss the "ethics of algorithms" while students nod and take notes. Now that discussion has suddenly spilled out of the conference room and into the war room.