
Three Analysts: The Danger of Military AI Isn't Killer Robots, But When Humans Stop Thinking

Source: REPUBLIKA | Translated from Indonesian | Technology

REPUBLIKA.CO.ID, JAKARTA – Mornings in the 21st century no longer begin solely with sunlight, but also with notifications that never truly sleep. Behind glowing screens, the world moves faster than human consciousness can keep up. Amid this acceleration, one thing is slowly changing: the way humans think.

Artificial intelligence, once viewed as a mere tool, is now taking on a deeper role. It no longer just calculates, but also suggests. It does not merely process, but also decides. And it is at this point that the boundary between tool and controller begins to blur.

In the military context, this change becomes far more serious. War, which has always been a human affair with all its moral complexities, now shares space with algorithms. Decisions that once arose from intuition, experience, and caution increasingly rely on machines that operate without feeling.

An article in Defense One sharply raises this concern. In it, Patrick Tucker highlights that the main threat of military AI is not robots that kill humans, but the change in how humans make decisions.

“The biggest risk isn’t that AI will make decisions for humans, but that humans will stop questioning AI’s decisions,” Tucker writes in his Defense One article. That sentence touches on something we have long ignored: the greatest danger lies not in the machines, but in the humans who begin to surrender.

Some analysts call this phenomenon “cognitive surrender”: the gradual entrusting of human judgment to systems deemed faster and more accurate. In the military world, such trust can be a double-edged sword.

On one hand, AI can process data on a scale impossible for humans. It can read patterns, predict movements, and provide recommendations in seconds. But on the other hand, this speed often comes without room for doubt. In war, doubt is often a lifesaver.

Similar concerns arise in an Al Jazeera report reviewing tensions between AI companies and the Pentagon. In the report, journalist Saumya Roy highlights the major risks when technology is used without full understanding.

“AI systems can hallucinate, misinterpret data, and make high-stakes errors, especially in complex environments like warfare,” Roy writes. This statement reminds us that artificial intelligence is never truly “intelligent” in the human sense. It only processes data; it does not understand context as humans do.
