
Report 2910

Associated Incidents

Incident 449 · 4 Reports
Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support

Mental health service criticised for experiment with AI chatbot
newscientist.com · 2023

Since this article was first published, Koko founder Rob Morris clarified some details of the experiment. We have updated the article to reflect this.

A mental health service that allows people to receive encouraging words of support and advice from others has received criticism after announcing it tested AI-generated responses.

Rob Morris, founder of the free mental health service Koko, outlined in a series of Twitter posts how the firm tested using a chatbot to help provide mental health support to about 4000 people. The chatbot was powered by GPT-3, a publicly available AI model built by San Francisco-based company OpenAI.

The test enabled users of Koko's online peer support network to enlist a chatbot's help in composing "kind words" as responses to other people's posts.
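
The article does not detail Koko's implementation, but for illustration, here is a minimal sketch of how a service might have asked GPT-3 to draft such a reply using OpenAI's Python client of that era (the legacy completions API). The model name, prompt and draft_kind_words helper are assumptions for this example, not Koko's actual code.

    import openai  # OpenAI Python client, pre-1.0 (GPT-3-era) interface

    openai.api_key = "sk-..."  # placeholder; a real key would be required

    def draft_kind_words(post: str) -> str:
        # Ask GPT-3 for an empathetic draft that a human helper can review
        # and edit before sending, matching the "co-created" workflow described.
        response = openai.Completion.create(
            model="text-davinci-003",  # an assumed GPT-3 model of the period
            prompt=(
                "Write a short, kind, supportive reply to this post on a "
                "peer-support forum:\n\n" + post + "\n\nReply:"
            ),
            max_tokens=120,
            temperature=0.7,
        )
        return response.choices[0].text.strip()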

Morris described Koko users as rating AI-composed messages "significantly higher than those written by humans on their own", but also said that "once people learned the messages were co-created by a machine, it didn't work. Simulated empathy feels weird, empty."

One element of the experiment that drew criticism was how recipients found out that messages had been composed with the chatbot's help. Initially, it appeared there was a period when recipients were completely unaware, though Morris has since said that was not the case and that those messages included a note saying "written in collaboration with Koko Bot".

The experiment "raises significant ethical and moral concerns", says Sarah Myers West at the AI Now Institute, a research centre in New York City.

Multiple researchers, tech developers and journalists responded on Twitter by describing the demonstration as unethical, citing issues around informed consent and the failure to first run the experiment past an institutional review board (IRB), a group specifically tasked with protecting the welfare of research subjects. Morris says the experiment was exempt from informed consent requirements.

On its website, Koko says over 2 million people – most of them adolescents – have used its mental health support services.

There are many examples of people knowingly consulting chatbots for online advice and support, including computer scientist Joseph Weizenbaum's ELIZA, an early example developed in 1964. But this particular experiment "is deserving of every bit of the close scrutiny it's currently getting", says West.

