Incident 449: Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support
Description: OpenAI's GPT-3 was deployed without ethical review by a mental health startup to support peer-to-peer mental healthcare, and research participants were allegedly deceived about the AI's role in their interactions with help providers.
Entities
Alleged: OpenAI developed an AI system deployed by Koko, which harmed research participants and Koko customers.
A mental health nonprofit is under fire for running an "experiment" in which an AI chatbot provided support to people seeking counseling, effectively testing the technology on real people.
“We provided mental health support to about 4,000 …
Last week, Koko, a mental health startup, received significant public criticism for allegedly using an artificial intelligence chatbot to conduct mental health counseling without obtaining informed consent from participants.
A mental health platform generated controversy on social media after one of its executives admitted that the company had used GPT-3, an AI language model that can respond to prompts with human-like text, to counsel some of its users.
Rob Morris is the…
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.