Description: Across 2024 and 2025, share features in multiple LLM platforms, including ChatGPT, Claude, Copilot, Qwen, Mistral, and Grok, allegedly exposed user conversations marked "discoverable" to search engines and archiving services. Over 100,000 chats were reportedly indexed and later scraped, purportedly revealing API keys, access tokens, personal identifiers, and sensitive business data.
Editor Notes: Timeline note: According to reporting, many of the "discoverable" chats had been archived by the Internet Archive by at least June 2024, but substantive reporting on the incident did not emerge until late July and early August 2025. This incident ID was created 08/25/2025 to capture the range of reporting on this cluster of related harms. See also Incident 1172: Meta AI Bug in Deployed Service Reportedly Allowed Potential Access to Other Users' Prompts and Responses.
Entities
Alleged: OpenAI, Anthropic, Microsoft, xAI, Alibaba, Mistral, ChatGPT, Claude, Copilot, Grok, Qwen, Internet Archive, Wayback Machine, and Google developed and deployed an AI system, which harmed Users of ChatGPT, Users of Claude, Users of Copilot, Users of Grok, Users of Qwen, Users of Mistral, and General public.
Alleged implicated AI systems: Mistral, ChatGPT, Claude, Copilot, Grok, Qwen, Internet Archive, Wayback Machine, and Google
Incident Stats
Incident ID: 1186
Report Count: 1
Incident Date: 2025-07-31
Editors: Daniel Atherton
Incident Reports
Reports Timeline
What started as a sharing feature turned into an unexpected SEO exposure. Over 100,000 ChatGPT chats were publicly accessible, some indexed, and many archived long before OpenAI responded.
It all started with something simple: a "Share" but…
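The report attributes the exposure to share links that were publicly reachable and carried no signal telling crawlers to stay away, which left them eligible for search indexing and archiving. As a rough illustration of that general mechanism only (the report does not describe any platform's actual implementation, and the URL and helper below are hypothetical), the Python sketch checks a share URL for the two standard opt-out signals crawlers honor: an X-Robots-Tag response header and a robots meta tag.

```python
"""Illustrative sketch only: inspect a (hypothetical) shared-conversation URL
for common "do not index" signals. A publicly reachable share page that sends
neither signal is generally eligible for indexing and archiving by crawlers."""

import re
import urllib.request


def indexing_signals(url: str) -> dict:
    # Fetch the page, then look at the X-Robots-Tag header and any
    # <meta name="robots" content="..."> tag in the HTML.
    req = urllib.request.Request(url, headers={"User-Agent": "indexability-check/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        header = resp.headers.get("X-Robots-Tag", "") or ""
        html = resp.read(200_000).decode("utf-8", errors="replace")

    # Simplified pattern: assumes name= appears before content= in the tag.
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        flags=re.IGNORECASE,
    )
    meta_content = meta.group(1) if meta else ""

    return {
        "x_robots_tag": header,
        "meta_robots": meta_content,
        "noindex_present": "noindex" in (header + " " + meta_content).lower(),
    }


if __name__ == "__main__":
    # Hypothetical share URL; real share links differ per platform.
    print(indexing_signals("https://example.com/share/abc123"))
```

If `noindex_present` is false and the page is linked from anywhere a crawler can reach (including the share dialog's own "discoverable" listing), standard search and archiving crawlers will treat it like any other public page, which is the dynamic the reporting describes.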
Variants
A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.
Similar Incidents