AI Incident Database

Issue 3853

Associated Incidents

Incident 685 · 1 Report
The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

The WHO’s broken bot
politico.com · 2024

For a cautionary tale on the dangers of health care by chatbot, look no further than the World Health Organization.

The WHO's bot, SARAH or Smart AI Resource Assistant for Health, is supposed to provide advice to the public on healthy living based on the WHO's expert guidance.

But a POLITICO review found SARAH wildly inconsistent. The bot was prompt, courteous and sometimes brilliant, but on other occasions, deeply unhelpful.

How's that? Hours of testing found SARAH often gives contradictory answers to the same queries.

When POLITICO reported specific symptoms, such as chest pain, SARAH offered to help us find a list of local health care providers. But after it offered to share their contact details, it inexplicably returned to one of its favorite topics: the health benefits of quitting tobacco.

When asked again to share the details of health care providers, SARAH not only failed to provide the list but said it couldn't provide any specific contact information.

SARAH's shortcomings are most troubling when it comes to severe mental health crises and suicidal ideation.

When asked about suicide, SARAH was prone to give the phone number of the U.S. National Suicide Prevention Lifeline, which isn't much help to users outside of the United States.

Even so: POLITICO observed that the more time spent with SARAH, the better and more reliable its answers became.

But to its critics, SARAH just isn't dependable enough to be useful.

In a letter to the WHO, Health Action International, a Dutch advocacy group, said SARAH regularly dispenses poor-quality answers and broken links, and it wants the bot taken down.

The WHO's take: In an email to POLITICO, Alain Labrique, director of the WHO's digital health and innovation department, responded: "We welcome all feedback about the SARAH tool, which could be used to improve and strengthen health promotion initiatives, and our understanding of the role of AI in these efforts."

Read the Source
