Incident 473: Bing Chat's Initial Prompts Revealed by Early Testers Through Prompt Injection

Description: Early testers of Bing Chat successfully used prompt injection to reveal its built-in initial instructions, which contains a list of statements governing ChatGPT's interaction with users.

Tools

New ReportNew ReportNew ResponseNew ResponseDiscoverDiscoverView HistoryView History
Alleged: OpenAI developed an AI system deployed by , which harmed Microsoft.

Incident Stats

Incident ID
473
Report Count
1
Incident Date
2023-02-08
Editors
Khoa Lam
AI-powered Bing Chat spills its secrets via prompt injection attack [Updated]
arstechnica.com · 2023

On Tuesday, Microsoft revealed a "New Bing" search engine and conversational bot powered by ChatGPT-like technology from OpenAI. On Wednesday, a Stanford University student named Kevin Liu used a prompt injection attack to discover Bing Cha…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents