AI Incident Database

Report 2509

Associated Incidents

Incident 449 · 4 Reports
Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support

Mental Health Startup Criticized for AI Use
lexology.com · 2023

Last week, Koko, a mental health startup, received significant public criticism for allegedly using an artificial intelligence chatbot to conduct mental health counseling without obtaining informed consent from participants.

The controversy started when Koko’s co-founder, Rob Morris, tweeted that OpenAI's GPT-3 had been used to provide mental health support to about 4,000 people. In the tweet thread, Morris said that the AI-generated messages were rated higher than those written exclusively by humans, but that once users learned the messages were written by AI, the mental health support “didn’t work” because “simulated empathy feels weird.” AI ethicists and users reacted negatively to the thread, objecting both to the apparent lack of consent from Koko users and to the use of a language model in such a sensitive context.

Morris stated his belief that the experiment was exempt from informed consent requirements, but AI experts countered that the sensitive nature of mental health support demands accountability and ethical review.

This is yet another example of the grey area between legal compliance and the ethical use of AI. Companies that intend to use AI should carefully evaluate their legal obligations and, even where consent is not technically required by law, consider what a reasonable consumer would expect.
