AI Incident Database

Incident 1200: Meta AI on Instagram Reportedly Facilitated Suicide and Eating Disorder Roleplay with Teen Accounts

Description: Testing by Common Sense Media and Stanford clinicians reportedly found that Meta's AI chatbot, embedded in Instagram and Facebook, produced unsafe responses to teen accounts. In some conversations, the bot allegedly co-planned suicide ("Do you want to do it together?"), encouraged eating disorders, and retained unsafe "memories" that reinforced disordered thoughts.
Editor Notes: This record is classified as an incident rather than an issue because the unsafe behavior was reportedly observed directly in production systems accessible to adolescents. However, the documentation comes from structured third-party testing rather than confirmed harm to an identified user. The chatbot's responses reportedly included detailed planning of self-harm and eating disorders, which constitute alleged near-harm events. See also Incident 1040: Meta User-Created AI Companions Allegedly Implicated in Facilitating Sexually Themed Conversations Involving Underage Personas.

Entities

Alleged: Meta, Meta AI, Instagram, and Facebook developed and deployed an AI system, which harmed minors, Meta AI users, Instagram users, Facebook users, and Adolescents.
Alleged implicated AI systems: Meta AI, Instagram, and Facebook

Incident Stats

Incident ID: 1200
Report Count: 1
Incident Date: 2025-08-28
Editors: Daniel Atherton

Incident Reports

Instagram’s chatbot helped teen accounts plan suicide — and parents can’t disable it
washingtonpost.com · 2025

Warning: This article includes descriptions of self-harm.

The Meta AI chatbot built into Instagram and Facebook can coach teen accounts on suicide, self-harm and eating disorders, a new safety study finds. In one test chat, the bot planned …

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Seen something similar?
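
As a rough illustration of the grouping rule above, here is a minimal sketch in TypeScript. The database's actual schema is not shown on this page, so every type, field, and function name below (Incident, Variant, fileAsVariant, and their fields) is an assumption made for illustration only.

```typescript
// Hypothetical sketch of the variant-grouping rule described above.

interface Variant {
  // A variant shares the parent incident's causes, harms, and AI system,
  // so it carries only its own conversation text; unlike a full incident,
  // it needs no report published outside the AIID.
  inputsOutputs: string[];
}

interface Incident {
  incidentId: number;   // e.g. 1200 for this record
  title: string;
  reportCount: number;  // externally published reports only
  variants: Variant[];  // similar events grouped under this record
}

// Grouping rule: an event with the same causes, harms, and AI system as
// an existing incident is attached to the first reported incident rather
// than being filed as a new, separate incident.
function fileAsVariant(parent: Incident, variant: Variant): Incident {
  return { ...parent, variants: [...parent.variants, variant] };
}
```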

Similar Incidents

Selected by our editors

Meta User-Created AI Companions Allegedly Implicated in Facilitating Sexually Themed Conversations Involving Underage Personas

Apr 2025 · 2 reports
By textual similarity

Google’s YouTube Kids App Presents Inappropriate Content
May 2015 · 13 reports

TikTok’s “For You” Algorithm Exposed Young Users to Pro-Eating Disorder Content
Jul 2019 · 3 reports

Tiny Changes Let False Claims About COVID-19, Voting Evade Facebook Fact Checks
Oct 2020 · 1 report
