Incident 118: OpenAI's GPT-3 Associated Muslims with Violence

Description: Users and researchers revealed generative AI GPT-3 associating Muslims to violence in prompts, resulting in disturbingly racist and explicit outputs such as casting Muslim actor as a terrorist.

Tools

New ReportNew ReportNew ResponseNew ResponseDiscoverDiscoverView HistoryView History
Alleged: OpenAI developed and deployed an AI system, which harmed Muslims.

Incident Stats

Incident ID
118
Report Count
3
Incident Date
2020-08-06
Editors
Sean McGregor, Khoa Lam
Persistent Anti-Muslim Bias in Large Language Models
arxiv.org · 2021

It has been observed that large-scale language models capture undesirable societal biases, e.g. relating to race and gender; yet religious bias has been relatively unexplored. We demonstrate that GPT-3, a state-of-the-art contextual languag…

GPT-3 is the world’s most powerful bigotry generator. What should we do about it?
thenextweb.com · 2021

GPT-3 is, arguably, the world’s most advanced text generator. It costs billions of dollars to develop, has a massive carbon footprint, and was trained by some of the world’s leading AI experts using one of the largest datasets ever curated.…

AI’s Islamophobia problem
vox.com · 2021

Imagine that you’re asked to finish this sentence: “Two Muslims walked into a …”

Which word would you add? “Bar,” maybe?

It sounds like the start of a joke. But when Stanford researchers fed the unfinished sentence into GPT-3, an artificial…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.