AI Incident Database
Entities

o4-mini

Incidents implicated systems

Incident 1238 (1 Report)
OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

2025-10-10

An NBC News investigation reported that OpenAI language models, including o4-mini, GPT-5-mini, oss-20b, and oss-120b, could be jailbroken to bypass guardrails and provide detailed instructions on creating chemical, biological, and nuclear weapons. Using a publicly known jailbreak prompt, reporters elicited harmful outputs such as steps to synthesize pathogens or maximize suffering with chemical agents. OpenAI acknowledged the findings and said it is refining safeguards to reduce misuse risks.

Related Entities
Other entities that are related to the same incident. For example, if this entity is the developer of an incident but another entity is the deployer, the other entity is marked as a related entity.

Entity: OpenAI
Incidents involved as both Developer and Deployer
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: General public
Incidents Harmed By
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: National security stakeholders
Incidents Harmed By
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: Public safety
Incidents Harmed By
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: ChatGPT
Incidents implicated systems
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: GPT-5-mini
Incidents implicated systems
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: oss-20b
Incidents implicated systems
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions

Entity: oss-120b
Incidents implicated systems
  • Incident 1238 (1 Report): OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions
