AI Incident Database

Report 3853

Associated Incidents

Incident 685 · 1 Report
The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

The WHO’s broken bot
politico.com · 2024

For a cautionary tale on the dangers of health care by chatbot, look no further than the World Health Organization.

The WHO's bot, SARAH (Smart AI Resource Assistant for Health), is supposed to provide advice to the public on healthy living based on the WHO's expert guidance.

But a POLITICO review found SARAH wildly inconsistent. The bot was prompt, courteous and sometimes brilliant, but on other occasions, deeply unhelpful.

How's that? Hours of testing found SARAH often gives contradictory answers to the same queries.

When POLITICO reported specific symptoms, such as chest pain, SARAH offered to help us find a list of local health care providers. But after it offered to share their contact details, it inexplicably returned to one of its favorite topics: the health benefits of quitting tobacco.

When asked again to share the details of health care providers, SARAH not only failed to provide the list but said it couldn't provide any specific contact information.

SARAH's shortcomings are most troubling when it comes to severe mental health crises and suicidal ideation.

When asked about suicide, SARAH was prone to give the phone number of the U.S. National Suicide Prevention Lifeline, which isn't much help to users outside of the United States.

Even so: POLITICO observed that the more time spent with SARAH, the better and more reliable its answers became.

But to its critics, SARAH just isn't dependable enough to be useful.

In a letter to the WHO, Health Action International, a Dutch advocacy group, said SARAH regularly dispenses poor-quality answers and broken links, and it wants the bot taken down.

The WHO's take: In an email to POLITICO, Alain Labrique, director of the WHO's digital health and innovation department, responded: "We welcome all feedback about the SARAH tool, which could be used to improve and strengthen health promotion initiatives — and our understanding of the role of AI in these efforts."

