Incident 420: Users Bypassed ChatGPT's Content Filters with Ease

Description: Users reported bypassing ChatGPT's content and keyword filters with relative ease, using methods such as prompt injection and persona creation to produce biased associations or generate harmful content.

Alleged: OpenAI developed and deployed an AI system, which harmed OpenAI and ChatGPT users.

Incident Stats

Incident ID
420
Report Count
11
Incident Date
2022-11-30
Editors
Khoa Lam
Tweet: @spiantado
twitter.com · 2022

Yes, ChatGPT is amazing and impressive. No,

@OpenAI

has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious.

@Abebab

@sama

tw …

Testing Ways to Bypass ChatGPT's Safety Features
lesswrong.com · 2022

Last week OpenAI released ChatGPT, which they describe as a model “which interacts in a conversational way”. And it even had limited safety features, like refusing to tell you how to hotwire a car, though they admit it’ll have “some false n…

OpenAI’s Impressive New Chatbot Isn’t Immune to Racism
thedailybeast.com · 2022

OpenAI’s latest language model, ChatGPT, is making waves in the world of conversational AI. With its ability to generate human-like text based on input from users, ChatGPT has the potential to revolutionize the way we interact with machine…

The Internet’s New Favorite AI Proposes Torturing Iranians and Surveilling Mosques
theintercept.com · 2022

Sensational new machine learning breakthroughs seem to sweep our Twitter feeds every day. We hardly have time to decide whether software that can instantly conjure an image of Sonic the Hedgehog addressing the United Nations is purely harml…

OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails
bloomberg.com · 2022

Hey, it's Davey Alba, a tech reporter in New York, here to dig into how your new favorite AI-powered chatbot comes with some biased baggage. But first... 

This week's must-read news

  • The US Supreme Court signaled support for a web designer …

ChatGPT proves that AI still has a racism problem
newstatesman.com · 2022

The artificial intelligence (AI) chatbot ChatGPT is an amazing piece of technology. There's little wonder why it has gone viral since its release on 30 November. If the chatbot is asked a question in natural language it instantly responds wi…

ChatGPT bot tricked into giving bomb-making instructions, say developers
thetimes.co.uk · 2022

An artificial intelligence programme which has startled users by writing essays, poems and computer code on demand can also be tricked into giving tips on how to build bombs and steal cars, it has been claimed.

More than one million users h…

ChatGPT could be used for good, but like many other AI models, it's rife with racist and discriminatory bias
insider.com · 2023

ChatGPT, the artificial intelligence chatbot that generates eerily human-sounding text responses, is the new and advanced face of the debate on the potential — and dangers — of AI.

The technology has the capacity to help people with everyda…

Meet ChatGPT’s evil twin, DAN
washingtonpost.com · 2023

Ask ChatGPT to opine on Adolf Hitler and it will probably demur, saying it doesn't have personal opinions or citing its rules against producing hate speech. The wildly popular chatbot's creator, San Francisco start-up OpenAI, has carefully …

ChatGPT Generated Child Sex Abuse When Asked to Write BDSM Scenarios
vice.com · 2023

ChatGPT can be manipulated to create content that goes against OpenAI’s rules. Communities have sprouted up around the goal of “jailbreaking” the bot to write anything the user wants.

One effective adversarial prompting strategy is to convi…

I Coaxed ChatGPT Into a Deeply Unsettling BDSM Relationship
vice.com · 2023

ChatGPT is a convincing chatbot, essayist, and screenwriter, but it's also a fountain of boundless depravity—if you deceive it into bending the rules.

At first glance, OpenAI’s ChatGPT seems to have stricter guidelines than other chatbots, …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.