Description: Researchers purportedly affiliated with the University of Zurich reportedly deployed undisclosed AI-generated comments on Reddit's r/ChangeMyView subreddit to study persuasion, allegedly fabricating identities such as sexual assault survivors and racial minorities. The experiment reportedly involved unauthorized demographic profiling, emotional manipulation, and violations of subreddit and platform rules. The researchers allegedly deviated from their approved protocol without renewed ethics oversight.
Editor Notes: Reports indicate that the unauthorized AI persuasion experiment on Reddit's r/ChangeMyView subreddit was conducted over approximately four months. Although exact start and end dates have not been confirmed, the activity likely took place between late 2024 and early 2025, concluding shortly before moderators publicly disclosed the experiment in late April 2025. The researchers reportedly posted approximately 1,783 AI-generated comments during this period. The duration and scale of the operation are included here for additional context but remain based on external reporting and moderator disclosures.
Entities
Alleged: Unspecified large language model developers developed an AI system deployed by University of Zurich researchers, which harmed Reddit users on the r/ChangeMyView subreddit.
Alleged implicated AI system: Unspecified large language models
Incident Stats
Incident ID
1043
Report Count
2
Incident Date
2025-04-26
Editors
Daniel Atherton
Incident Reports
Reports Timeline
The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users. This experiment deployed AI-generated comments to study how AI could be used to change …
A team of researchers who say they are from the University of Zurich ran an "unauthorized," large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to resear…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.