Latest Study Finds AI Health Advice Often Inaccurate

Source: ANTARA_ID | Technology

Jakarta (ANTARA) – A study recently published in Nature Medicine suggests that health advice from artificial intelligence (AI) chatbots is often inaccurate and can change unpredictably with slight variations in how questions are worded.

As reported by Channel News Asia on Sunday (22 February), local time, the study examined 1,200 participants from the UK, most of whom had no medical training. They were given detailed medical scenarios, complete with symptoms, general lifestyle details, and medical history.

The researchers asked the participants to chat with a bot to determine the appropriate next steps, such as whether to call an ambulance or seek self-care at home.

The researchers found that participants chose the “correct” action – predetermined by a panel of doctors – less than half the time. Participants correctly identified the condition, such as gallstones or subarachnoid haemorrhage, only about 34 per cent of the time.

Participants often failed to provide enough information or omitted the most relevant symptoms, leaving the chatbot to give advice based on an incomplete picture of the problem.

In contrast, when the researchers entered the complete medical scenario directly into the chatbot, the chatbots identified the condition correctly in 94 per cent of cases.

In the three years since AI chatbots became publicly available, health-related questions have become one of the most common topics users ask them about.

Adam Mahdi, a professor at the Oxford Internet Institute and senior author of the Nature Medicine study, argues that performance on straightforward medical questions is not a good indicator of how effective chatbots are for actual patients.

Meanwhile, Dr Robert Wachter, Chair of the Department of Medicine at the University of California, San Francisco, who studies AI in healthcare, says that making a diagnosis requires recognising which details are relevant and which can be ignored.

“There is a lot of cognitive magic and experience required to determine the important elements of a case, which are then entered into the bot,” said Wachter.

However, Andrew Bean, a graduate student at Oxford and lead author of the paper, says that the burden of designing the perfect question should not always fall on the user. He says that the chatbot should ask follow-up questions, similar to how a doctor gathers information from a patient.

Experts also note that AI tends either to give overly cautious advice or, conversely, to underestimate serious symptoms.

The researchers also conclude that the AI models studied are not yet ready to be implemented directly in patient care.

Copyright © ANTARA 2026 It is strictly prohibited to take content, crawl or automatically index for AI on this website without written permission from the ANTARA News Agency.
