Incident 262: DALL-E Mini Reportedly Reinforced or Exacerbated Societal Biases in Its Outputs as Gender and Racial Stereotypes

Description: The publicly deployed open-source model DALL-E Mini was acknowledged by its developers, and found by its users, to have produced images that reinforced racial and gender biases.



Incident Stats

Incident ID:
Report Count:
Incident Date:
Editor: Khoa Lam

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal: Visual Art Generation
Known AI Technology: Transformer, Generative Adversarial Network
Known AI Technical Failure: Distributional Bias
Potential AI Technical Failure: Lack of Explainability, Misinformation Generation Hazard, Dataset Imbalance, Context Misidentification, Data or Labelling Noise

That AI Image Generator Is Spitting Out Some Awfully Racist Stuff · 2022

Everyone's having a grand old time feeding outrageous prompts into the viral DALL-E Mini image generator, but as with all artificial intelligence, it's hard to stamp out the ugly, prejudiced edge cases.

Released by AI artist and programmer…

AI art generator DALL·E mini is spewing awfully racist images from text prompts · 2022

In 2021, AI research laboratory OpenAI invented DALL·E, a neural network trained to generate images from text prompts. With just a few descriptive words, the system (named after both surrealist painter Salvador Dalí and the adorable Pixar r…

DALL-E Mini Is Obsessed With Women in Saris, and No One Knows Why · 2022

The only real limits to DALL-E Mini are the creativity of your own prompts and its uncanny brushwork. The accessible-to-all AI internet image generator can conjure up blurry, twisted, melting approximations of whatever scenario you can thin…

LinkedIn Post: Mia Dand · 2022

This weekend, I tried out DALL-e mini hosted by Hugging Face. It's an AI model that generates images from any word prompt. Every image it generated for "expert" "data scientist" "computer scientist" showed some distorted version of a white …


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.