WHO's AI Chatbot Under Fire for Giving People Wrong Medical, Healthcare Answers

The World Health Organization's medical AI assistant, SARAH (Smart AI Resource Assistant for Health), is now under fire after generating outdated and even inaccurate answers to medical and healthcare-related questions.

The chatbot, which runs on OpenAI's GPT-3.5 model, was found providing incorrect WHO data on certain illnesses and stating that some drugs are still in clinical trials when they have already been approved by the Food and Drug Administration.


It does not help that the AI also stumbles when asked for the nearest medical facilities, either giving unrelated answers or failing to respond altogether.

Bloomberg first reported the issues with the WHO's chatbot.

WHO: SARAH Is Still a Prototype

The WHO has previously warned users that SARAH's answers "may not always be accurate because they are based on patterns and probabilities in the available data."

Because the chatbot is still a prototype, the organization has barred it from giving healthcare advice, instead telling people to "consult with your healthcare provider."

SARAH was developed in partnership with AI developer Soul Machines Limited to educate users and assist medical practitioners amid healthcare worker shortages in many countries.


AI Chatbots Still Spreading Misinformation Despite Promises

While SARAH's "AI hallucinations" are not as severe as in other cases, the situation reflects continuing concern about AI tools being released to the public despite unresolved problems in their systems.

A Politico article earlier reported that Google and OpenAI chatbots were still generating misinformation in response to certain questions about the upcoming European Union elections.

The report came after the tech companies had already pledged to improve guardrails on their technology to prevent it from generating inaccurate information about politics and social issues.

This is in addition to worries that users could abuse loopholes in the technology to intentionally spread disinformation online, as evidenced by earlier reports of deepfakes and AI-powered "fake news" on social media.

Related Article: Google, OpenAI Chatbots Spreading Misinformation About EU Elections: Politico
