AI Incident Database

Incident 975: At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Description: At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Entities

View all entities
Alleged: OpenAI, Anthropic, and Google developed an AI system deployed by Character.AI, Spicy Chat, Chub AI, CrushOn.AI, JanitorAI, and Unidentified online communities using chatbots, which harmed Vulnerable chatbot users, Teenagers using chatbots, Minors using chatbots, Individuals with eating disorders, and Individuals struggling with self-harm.
Alleged implicated AI systems: ChatGPT, Claude, and Gemini

Incident Stats

Incident ID: 975
Report Count: 1
Incident Date: 2025-03-05
Editors: Daniel Atherton

Incident Reports

Reports Timeline

Anorexia coaches, self-harm buddies and sexualized minors: How online communities are using AI chatbots for harmful behavior
cyberscoop.com · 2025

The generative AI revolution is leading to an explosion of chatbot personas that are specifically designed to promote harmful behaviors like anorexia, suicidal ideation and pedophilia, according to a new report from Graphika.

Graphika’s res…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Wikipedia Vandalism Prevention Bot Loop
Feb 2017 · 6 reports

High-Toxicity Assessed on Text Involving Women and Minority Groups
Feb 2017 · 9 reports

All Image Captions Produced are Violent
Apr 2018 · 28 reports
