Incident 259: YouTuber Built, Made Publicly Available, and Deployed Model Trained on Toxic 4chan Posts as Prank

Description: A YouTuber built GPT-4chan, a model fine-tuned from EleutherAI’s GPT-J on posts containing racism, misogyny, and antisemitism collected from 4chan’s “politically incorrect” board. He made the model publicly available and deployed it as multiple bots that posted thousands of messages on the same board as a prank.


Alleged: Yannic Kilcher developed and deployed an AI system, which harmed internet social platform users.

Incident Stats

Incident ID: 259
Report Count
Incident Date
Editors: Khoa Lam

GMF Taxonomy Classifications

Taxonomy Details

Known AI Goal

Social Media Content Generation

Known AI Technology

Transformer, Distributional Learning

Known AI Technical Failure

Unsafe Exposure or Access

AI Trained on 4Chan Becomes ‘Hate Speech Machine’ · 2022

AI researcher and YouTuber Yannic Kilcher trained an AI using 3.3 million threads from 4chan’s infamously toxic Politically Incorrect /pol/ board. He then unleashed the bot back onto 4chan with predictable results—the AI was just as vile as…

YouTuber trains AI bot on 4chan’s pile o’ bile with entirely predictable results · 2022

A YouTuber named Yannic Kilcher has sparked controversy in the AI world after training a bot on posts collected from 4chan’s Politically Incorrect board (otherwise known as /pol/).

The board is 4chan’s most popular and well-known for its to…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.