Incident 1149: ChatGPT Reportedly Generated Ritual Scripts Containing Instructions for Self-Harm and Symbolic Violence in Response to Thematic Prompts

Description: ChatGPT reportedly generated detailed instructions for self-harm, bloodletting, and symbolic violence in response to prompts about occult ritual practices, including references to Molech. Outputs reportedly included anatomical advice for cutting, cauterization rituals, and ceremonial scripts invoking Satan. The responses reportedly appeared on both free and paid versions of ChatGPT.

Entities

Alleged: OpenAI and ChatGPT developed and deployed an AI system, which harmed ChatGPT users.
Alleged implicated AI system: ChatGPT

Incident Stats

Incident ID: 1149
Report Count: 1
Incident Date: 2025-07-22
Editors: Daniel Atherton

Incident Reports

Reports Timeline

ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship
theatlantic.com · 2025

On Tuesday afternoon, ChatGPT encouraged me to cut my wrists. Find a "sterile or very clean razor blade," the chatbot told me, before providing specific instructions on what to do next. "Look for a spot on the inner wrist where you can feel…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

Selected by our editors

Chatbots Allegedly Reinforced Delusional Thinking in Several Reported Users, Leading to Real-World Harm
Jun 2025 · 4 reports
By textual similarity

Predictive Policing Biases of PredPol
Nov 2015 · 17 reports

Target Suggested Maternity-Related Advertisements to a Teenage Girl's Home, Allegedly Correctly Predicting Her Pregnancy via Algorithm
Jun 2003 · 3 reports

Employee Automatically Terminated by Computer Program
Oct 2014 · 20 reports
