Incident 179: Images Generated by OpenAI’s DALL-E 2 Exhibited Bias and Reinforced Stereotypes

Description: OpenAI's image-generation-from-natural-language-description model, DALL-E 2, was shown to have various risks pertaining to its use, such as misuse as disinformation, explicit content generation, and reinforcement of gender and racial stereotypes, which were acknowledged by its developers.

Tools

New ReportNew ReportNew ResponseNew ResponseDiscoverDiscover
Alleged: OpenAI developed and deployed an AI system, which harmed Minority Groups and underrepresented groups.

Incident Stats

Incident ID
179
Report Count
3
Incident Date
2022-04-01
Editors
Sean McGregor, Khoa Lam

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Visual Art Generation

Known AI Technology

Transformer, Distributional Learning

Known AI Technical Failure

Distributional Bias, Unsafe Exposure or Access, Misinformation Generation Hazard, Inappropriate Training Content

Potential AI Technical Failure

Unauthorized Data, Lack of Transparency

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.