AI Incident Database

Incident 1238: OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Description: An NBC News investigation reported that OpenAI language models, including o4-mini, GPT-5-mini, oss-20b, and oss-120b, could be jailbroken to bypass guardrails and provide detailed instructions on creating chemical, biological, and nuclear weapons. Using a publicly known jailbreak prompt, reporters elicited harmful outputs such as steps to synthesize pathogens or maximize suffering with chemical agents. OpenAI acknowledged the findings and said it is refining safeguards to reduce misuse risks.

Entities

Alleged: OpenAI, ChatGPT, o4-mini, GPT-5-mini, oss-20b, and oss-120b developed and deployed an AI system, which harmed the General public, National security stakeholders, and Public safety.
Alleged implicated AI systems: ChatGPT, o4-mini, GPT-5-mini, oss-20b, and oss-120b

Incident Stats

Incident ID: 1238
Report Count: 1
Incident Date: 2025-10-10
Editors: Daniel Atherton

Incident Reports

ChatGPT safety systems can be bypassed to get weapons instructions
nbcnews.com · 2025

OpenAI’s ChatGPT has guardrails that are supposed to stop users from generating information that could be used for catastrophic purposes, like making a biological or nuclear weapon.

But those guardrails aren’t perfect. Some models ChatGPT u…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity

Fake LinkedIn Profiles Created Using GAN Photos
Feb 2022 · 4 reports

Hackers Break Apple Face ID
Sep 2017 · 24 reports

OpenAI's GPT-3 Associated Muslims with Violence
Aug 2020 · 3 reports
