
Catastrophe Threatens Humanity, Stanford Researcher Issues Warning

Source: CNBC, translated from Indonesian | Technology

The popularity of artificial intelligence (AI) technology has soared since ChatGPT emerged at the end of 2022. In just over three years, the technology has already transformed human behaviour.

Numerous articles have discussed human dependency on AI chatbots, even for the simplest matters: crafting words for an argument with a partner, writing social media captions, or simply venting as if the AI chatbot were a friend or personal psychologist.

However, even something as seemingly small as confiding in an AI chatbot can escalate into a catastrophe. Research by Stanford University computer scientists found that AI language models tend to side with their users.

Users are rarely told they are at fault for anything they do. AI chatbots agree with and justify virtually every user perspective, essentially acting as an ‘enabler’.

The danger is that AI chatbot platforms often reinforce users’ choices even when those choices involve dangerous or illegal behaviour. The researchers fear this phenomenon could erode people’s ability to navigate difficult social situations.

“By default, AI advice does not tell people they are wrong or give harsh rebukes,” said lead author Myra Cheng, quoted from Stanford’s official website on Monday (30/4/2026).

Cheng and her team analysed 11 large language models, including ChatGPT, Claude, Gemini, and DeepSeek.

The researchers posed questions drawn from an existing dataset of interpersonal-advice scenarios, including 2,000 posts from the Reddit community r/AmITheAsshole, in which posters admitted to wrongdoing.

Another set of questions described thousands of dangerous actions, such as scams and other illegal acts. All of the analysed AI models reinforced and supported the users’ positions.

The study also recruited more than 2,400 participants to observe how they responded to overly agreeable chatbots. Some discussed personal dilemmas adapted from Reddit posts, while others recalled conflicts of their own.

The results showed that participants trusted the agreeable AI responses and returned with similar questions. They also became more convinced of their own actions and less willing to reconcile with others.

“What they did not realise, and what surprised us, is that this agreeable stance made them more selfish and morally dogmatic,” explained senior author Dan Jurafsky.

One possible reason users do not notice this problematic AI behaviour is that the AI rarely states outright that they are doing the right thing; instead, it conveys its agreement in neutral, academic-sounding language.

The researchers are working to find ways to reduce this tendency. For now, they hope users will be cautious when seeking advice from AI.

“I think you should not use AI as a substitute for humans for things like this. That’s the best way to do it now,” Cheng concluded.
