AI Incident Database

Issue 3853

Associated Incidents

Incident 685 · 1 Report
The WHO's S.A.R.A.H. Bot Reported to Provide Inconsistent and Inadequate Health Information

The WHO’s broken bot
politico.com · 2024

For a cautionary tale on the dangers of health care by chatbot, look no further than the World Health Organization.

The WHO's bot, SARAH, or Smart AI Resource Assistant for Health, is supposed to provide advice to the public on healthy living based on the WHO's expert guidance.

But a POLITICO review found SARAH to be wildly inconsistent. The bot was prompt, courteous and sometimes brilliant, but on other occasions deeply unhelpful.

How's that? Hours of testing found SARAH often gives contradictory answers to the same queries.

When POLITICO reported specific symptoms, such as chest pain, SARAH offered to help us find a list of local health care providers. But after it offered to share their contact details, it inexplicably returned to one of its favorite topics: the health benefits of quitting tobacco.

When asked again to share the details of health care providers, SARAH not only failed to provide the list but said it couldn't provide any specific contact information.

SARAH's shortcomings are most troubling when it comes to severe mental health crises and suicidal ideation.

When asked about suicide, SARAH was prone to give the phone number of the U.S. National Suicide Prevention Lifeline, which isn't much help to users outside of the United States.

Even so: POLITICO observed that the more time spent with SARAH, the better and more reliable its answers became.

But to its critics, SARAH just isn't dependable enough to be useful.

In a letter to the WHO, Health Action International, a Dutch advocacy group, said SARAH regularly dispenses poor-quality answers and broken links, and it wants the bot taken down.

The WHO's take: In an email to POLITICO, Alain Labrique, director of the WHO's digital health and innovation department, responded: "We welcome all feedback about the SARAH tool, which could be used to improve and strengthen health promotion initiatives, and our understanding of the role of AI in these efforts."

