Incident 506: ChatGPT Allegedly Produced False Accusation of Sexual Harassment

Description: A lawyer in California asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. The chatbot produced a false story alleging that Professor Jonathan Turley had sexually harassed a student on a class trip.

Alleged: OpenAI developed and deployed an AI system, which harmed Jonathan Turley.

Incident Stats

Incident ID: 506
Report Count: 2
Incident Date: 2023-03-29
Editors: Sean McGregor
Large Libel Models: ChatGPT-3.5 Erroneously Reporting Supposed Felony Pleas, Complete with Made-Up Media Quotes?
reason.com · 2023

[UPDATE: My apologies for misattributing this at first to ChatGPT-4. I had accessed the OpenAI query portal through a page focusing on ChatGPT-4 (https://openai.com/product/gpt-4) and then clicking on "Try on ChatGPT Plus," which is why I h…

ChatGPT sometimes makes up facts. For one law prof, it went too far.
washingtonpost.com · 2023

One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.