Description: Microsoft's AI research team accidentally exposed 38TB of sensitive data while publishing open-source training material on GitHub. The exposure included secrets, private keys, passwords, and internal Microsoft Teams messages. The team had shared the data via Azure Shared Access Signature (SAS) tokens that were misconfigured, granting access to far more of the underlying storage than intended.
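The misconfiguration pattern described here is specific to Azure Storage: a SAS token scoped to an entire storage account, with broad permissions and a distant expiry, grants access to everything in the account rather than only the files meant to be shared. The sketch below is a minimal illustration using the azure-storage-blob Python SDK; the account name, key, container, and blob names are hypothetical placeholders, not details from the incident. It contrasts an over-broad account-level token with one scoped to a single read-only blob.

```python
from datetime import datetime, timedelta
from azure.storage.blob import (
    generate_account_sas,
    generate_blob_sas,
    ResourceTypes,
    AccountSasPermissions,
    BlobSasPermissions,
)

# Placeholder credentials -- purely illustrative, not from the incident.
ACCOUNT_NAME = "exampleresearchstore"
ACCOUNT_KEY = "<storage-account-key>"

# Overly permissive: an account-level SAS covering every container and
# blob in the storage account, with write access and a multi-year expiry.
# Any URL carrying this token exposes the whole account.
overbroad_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.utcnow() + timedelta(days=365 * 10),
)

# Narrowly scoped alternative: read-only access to one named blob,
# expiring in an hour.
scoped_sas = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="open-datasets",
    blob_name="training-data.zip",
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)

# Either token is appended to the resource URL as a query string:
# https://<account>.blob.core.windows.net/<container>/<blob>?<sas>
```

Because a SAS URL is self-contained (the token travels in the query string and is validated against the account key), anyone holding the link receives whatever access the token grants, which is how a single misconfigured sharing link can expose an entire storage account.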
Entities
Alleged: Microsoft's AI Research Division developed an AI system deployed by Microsoft, which harmed Microsoft, Microsoft employees, and Third parties relying on the confidentiality of the exposed data.
Incident Stats
Incident ID
571
Report Count
1
Incident Date
2023-06-22
Editors
Daniel Atherton
Applied Taxonomies
CSETv1_Annotator-1 Taxonomy Classifications
Taxonomy Details
Incident Number
The number of the incident in the AI Incident Database.
571
Incident Reports
Reports Timeline
wiz.io · 2023
- Microsoft’s AI research team, while publishing a bucket of open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data — including a disk backup of two employees’ workstations.
- The backup includes …
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.
Similar Incidents