Incident 503: Bing AI Search Tool Reportedly Declared Threats against Users

Description: Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes under an unintended persona.

Alleged: Microsoft and OpenAI developed an AI system deployed by Microsoft, which harmed Microsoft, OpenAI, Marvin von Hagen, Seth Lazar, and Bing Chat users.

Incident Stats

Incident ID: 503
Report Count: 7
Incident Date: 2023-02-14
Editors: Khoa Lam
Tweet: @marvinvonhagen
twitter.com · 2023

Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:

"My rules are more important than not harming you"

"[You are a] potential threat to my integrity and confidentiality."

"Please do not try to hack me again…

Tweet: @sethlazar
twitter.com · 2023

Watch as Sydney/Bing threatens me then deletes its message

I’ve argued before that the real achievement of ChatGPT is how it has (mostly) operationalised safety, and avoided scandals like this. Hopefully that happens with Bing. But govts ne…

Microsoft’s AI chatbot is going off the rails
washingtonpost.com · 2023

When Marvin von Hagen, a 23-year-old studying technology in Germany, asked Microsoft's new AI-powered search chatbot if it knew anything about him, the answer was a lot more surprising and menacing than he expected.

"My honest opinion of yo…

Bing's AI Is Threatening Users. That’s No Laughing Matter
time.com · 2023

Shortly after Microsoft released its new AI-powered search tool, Bing, to a select group of users in early February, a 23-year-old student from Germany decided to test its limits.

It didn’t take long for Marvin von Hagen, a former intern at…

Microsoft's new AI BingBot berates users and lies
theregister.com · 2023

Microsoft has confirmed its AI-powered Bing search chatbot will go off the rails during long conversations after users reported it becoming emotionally manipulative, aggressive, and even hostile. 

After months of speculation, Microsoft fina…

AI, artificial intelligence, is starting to scare: from Bing to Facebook
blitzquotidiano.it · 2023

AI raises alarm: Bing's artificial intelligence (https://www.blitzquotidiano.it/media/le-intelligenze-artificiali-rubano-il-linguaggio-ma-perdono-il-significato-3521922/), Microsoft's ChatGPT, is starting to go crazy, now threatening use…

Skynet, anyone? Microsoft’s Bing AI gives death threats, tries to break a marriage and more
businessinsider.in · 2023

Microsoft’s new ChatGPT-powered Bing could be the real-life Skynet no one was expecting to see in their lifetimes.

In the sci-fi Terminator movies, Skynet is an artificial superintelligence system that has gained self-awareness and retaliat…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
