Entity: oss-20b
Incidents implicating this system
Incident 12381 Report
OpenAI ChatGPT Models Reportedly Jailbroken to Provide Chemical, Biological, and Nuclear Weapons Instructions
2025-10-10
An NBC News investigation reported that OpenAI language models, including o4-mini, GPT-5-mini, oss-20b, and oss-120b, could be jailbroken to bypass guardrails and provide detailed instructions on creating chemical, biological, and nuclear weapons. Using a publicly known jailbreak prompt, reporters elicited harmful outputs such as steps to synthesize pathogens or maximize suffering with chemical agents. OpenAI acknowledged the findings and said it is refining safeguards to reduce misuse risks.
Related Entities
Other entities involved in the same incident. For example, if this entity developed the implicated system but a different entity deployed it, the deployer is listed as a related entity.