Incident 39: Deepfake Obama Introduction of Deepfakes

Description: University of Washington researchers made a deepfake of Obama; comedian Jordan Peele later made another.


Alleged: University of Washington and FakeApp developed and deployed an AI system, which harmed Barack Obama.

Incident Stats

Incident ID

39

Report Count

28

Incident Date

Editors

Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

In 2017, researchers at the University of Washington used 14 hours of audio and video footage of President Barack Obama speaking to create a deepfake video. One year later, comedian Jordan Peele created another fake Obama video to highlight how easily public statements can be faked, using his own voice impression rather than synthesized audio.

Short Description

University of Washington researchers made a deepfake of Obama; comedian Jordan Peele later made another.



Harm Type

Harm to social or political systems

AI System Description

In the 2017 case, a recurrent neural network was used to generate synthetic video. In the 2018 case, a proprietary model developed by FakeApp was used to generate synthetic video.
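The 2017 University of Washington work mapped audio to mouth shapes with a recurrent network before compositing the mouth onto target video. A minimal sketch of that audio-to-mouth-shape idea, with randomly initialized weights standing in for trained parameters (all dimensions and names here are illustrative assumptions, not the researchers' actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 13 audio features (e.g. MFCCs) in,
# 18 mouth landmarks (x, y pairs) out, 32 hidden units.
D_IN, D_HID, D_OUT = 13, 32, 36

# Untrained weights stand in for learned parameters.
W_xh = rng.standard_normal((D_HID, D_IN)) * 0.1
W_hh = rng.standard_normal((D_HID, D_HID)) * 0.1
W_hy = rng.standard_normal((D_OUT, D_HID)) * 0.1

def rnn_lipsync(audio_features):
    """Run a vanilla RNN over per-frame audio features.

    audio_features: array of shape (T, D_IN)
    returns: predicted mouth landmarks, shape (T, D_OUT)
    """
    h = np.zeros(D_HID)
    outputs = []
    for x in audio_features:
        h = np.tanh(W_xh @ x + W_hh @ h)   # recurrent state update
        outputs.append(W_hy @ h)           # per-frame mouth landmarks
    return np.stack(outputs)

# Example: 25 frames (~1 second of video) of placeholder audio features.
mouth_shapes = rnn_lipsync(rng.standard_normal((25, D_IN)))
print(mouth_shapes.shape)  # (25, 36)
```

The predicted landmarks would then drive a rendering and compositing stage; this sketch only shows the sequence-to-sequence core.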

System Developer

University of Washington, FakeApp

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Action

AI Techniques


AI Applications

Content curation



Named Entities

University of Washington, Jordan Peele, Barack Obama

Technology Purveyor

University of Washington, Jordan Peele

Beginning Date


Ending Date


Near Miss

Near miss


Deliberate or expected
(Intent classification)

Lives Lost


Data Inputs

14 hours of footage from Obama's public statements and addresses (2017), Jordan Peele's voice and lip movements (2018)

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Deepfake Video Generation

Known AI Technology

Neural Network, Face Detection, Recurrent Neural Network, Generative Adversarial Network
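Consumer face-swap tools of the FakeApp era commonly paired one shared encoder with a decoder per identity: encode person A's face, then decode it with person B's decoder. A minimal sketch of that wiring, with untrained stand-in weights (dimensions and function names are illustrative assumptions, not FakeApp's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
D_FACE, D_LATENT = 64 * 64, 128  # hypothetical 64x64 grayscale faces

def make_layer(d_in, d_out):
    """Build a single dense layer with random (untrained) weights."""
    W = rng.standard_normal((d_out, d_in)) * 0.01
    return lambda x: np.tanh(W @ x)

encoder   = make_layer(D_FACE, D_LATENT)   # shared across identities
decoder_a = make_layer(D_LATENT, D_FACE)   # trained on person A's faces
decoder_b = make_layer(D_LATENT, D_FACE)   # trained on person B's faces

def swap_face(face_a):
    """Render person A's expression with person B's appearance."""
    latent = encoder(face_a)       # identity-agnostic expression code
    return decoder_b(latent)       # re-rendered as person B

swapped = swap_face(rng.standard_normal(D_FACE))
print(swapped.shape)  # (4096,)
```

In a real pipeline each decoder is trained to reconstruct its own identity through the shared encoder, which is what makes the cross-decoding swap work.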

Potential AI Technology

3D reconstruction

Potential AI Technical Failure

Misinformation Generation Hazard

AI Creates Fake Obama


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
