AI Incident Database

Incident 1043: Reddit Moderators Report Unauthorized AI Study Involving Fabricated Identities by Purported University of Zurich Researchers

Description: Researchers purportedly affiliated with the University of Zurich reportedly posted undisclosed AI-generated comments on Reddit's r/ChangeMyView to study persuasion, allegedly adopting fabricated identities such as sexual assault survivors and racial minorities. The experiment reportedly involved unauthorized demographic profiling, emotional manipulation, and violations of subreddit and platform rules. The researchers allegedly deviated from their approved protocol without renewed ethics oversight.
Editor Notes: Timeline notes: Reports indicate that the unauthorized AI persuasion experiment on Reddit's r/ChangeMyView subreddit was conducted over approximately four months. Although exact start and end dates have not been confirmed, evidence suggests the activity likely took place between late 2024 and early 2025, concluding shortly before the moderators publicly disclosed the experiment in late April 2025. The researchers reportedly posted approximately 1,783 AI-generated comments during this period. The duration and scale of the operation are included here for additional context but are based on external reporting and moderator disclosures.

Entities

Alleged: Unspecified large language model developers developed an AI system deployed by University of Zurich researchers, which harmed Reddit users in the r/ChangeMyView subreddit.
Alleged implicated AI system: Unspecified large language models

Incident Stats

Incident ID: 1043
Report Count: 6
Incident Date: 2025-04-26
Editors: Daniel Atherton

Incident Reports

META: Unauthorized Experiment on CMV Involving AI-generated Comments
reddit.com · 2025

The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. This experiment deployed AI-generated comments to study how AI could be used to change …

Researchers Secretly Ran a Massive, Unauthorized AI Persuasion Experiment on Reddit Users
404media.co · 2025

A team of researchers who say they are from the University of Zurich ran an "unauthorized," large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to resear…

Secret AI experiment on Reddit accused of ethical violations
theweek.com · 2025

Reddit responded on April 28 to news that a group of researchers had conducted a secret experiment using artificial intelligence chatbots in one of its most popular forums. The actions of those involved in the experiment have raised questio…

A Case For Ethical and Transparent Research Experiments in the Public Interest
independenttechresearch.org · 2025

On April 26, moderators of r/ChangeMyView, a community on Reddit dedicated to understanding the perspectives of others, revealed that academic researchers from the University of Zürich conducted a large-scale, unauthorized AI experiment on …

Reddit slams ‘unethical experiment’ that deployed secret AI bots in forum
washingtonpost.com · 2025

Reddit is raising the alarm about what it called an "improper and highly unethical experiment" by a group of University of Zurich researchers, who secretly deployed AI bots on a popular forum to study how artificial intelligence can influen…

‘The Worst Internet-Research Ethics Violation I Have Ever Seen’
theatlantic.com · 2025

When Reddit rebranded itself as "the heart of the internet" a couple of years ago, the slogan was meant to evoke the site's organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a commun…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

DALL-E Mini Reportedly Reinforced or Exacerbated Societal Biases in Its Outputs as Gender and Racial Stereotypes
Jun 2022 · 4 reports

YouTube's AI Mistakenly Banned Chess Channel over Chess Language Misinterpretation
Jun 2020 · 6 reports

TikTok’s “For You” Algorithm Exposed Young Users to Pro-Eating Disorder Content
Jul 2019 · 3 reports
