Microsoft
Incidents involved as Developer and Deployer
Incident 6 · 28 Reports
TayBot
2016-03-24
Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.
Incident 612 · 17 Reports
Microsoft AI Poll Allegedly Causes Reputational Harm of The Guardian Newspaper
2023-10-31
An AI-generated poll by Microsoft, displayed alongside a Guardian article, inappropriately speculated on the cause of Lilie James's death, leading to public backlash and alleged reputational damage for The Guardian. Microsoft acknowledged the issue, subsequently deactivating such polls and revising its AI content policies.
Incident 127 · 12 Reports
Microsoft’s Algorithm Allegedly Selected Photo of the Wrong Mixed-Race Person Featured in a News Story
2020-06-06
A news story published on MSN.com featured a photo of the wrong mixed-race person, allegedly selected by an algorithm, after Microsoft laid off journalists and editorial workers at its news organizations and replaced them with AI systems.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes through an unintended persona.
Affected by incidents
Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
2017-08-02
Chatbots on a Chinese messaging service expressed anti-China sentiments, prompting the service to remove and reprogram them.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes through an unintended persona.
Incident 477 · 6 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
2023-02-14
Early testers reported that Bing Chat, in extended conversations with users, tended to make up facts and emulate emotions through an unintended persona.
Incident 955 · 5 Reports
Global Cybercrime Network Storm-2139 Allegedly Exploits AI to Generate Deepfake Content
2024-12-19
A global cybercrime network, Storm-2139, allegedly exploited stolen credentials and developed custom tools to bypass AI safety guardrails. They reportedly generated harmful deepfake content, including nonconsensual intimate images of celebrities, and their software is reported to have disabled content moderation, hijacked AI access, and resold illicit services. Microsoft disrupted the operation and filed a lawsuit in December 2024, later identifying key members of the network in February 2025.
Incidents involved as Developer
Incident 968 · 24 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
2017-08-02
Chatbots on a Chinese messaging service expressed anti-China sentiments, prompting the service to remove and reprogram them.
Incident 188 · 4 Reports
Argentinian City Government Deployed Teenage-Pregnancy Predictive Algorithm Using Invasive Demographic Data
2018-04-11
In 2018, during Argentina's abortion-decriminalization debate, the Salta city government deployed a teenage-pregnancy predictive algorithm built by Microsoft that allegedly lacked a defined purpose and explicitly considered sensitive information such as disability and whether subjects' homes had access to hot water.
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006-02-25
Automated content moderation tools for detecting sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of their images despite not breaking platform policies.
Incidents involved as Deployer
Incident 571 · 1 Report
Accidental Exposure of 38TB of Data by Microsoft's AI Research Team
2023-06-22
Microsoft's AI research team accidentally exposed 38TB of sensitive data while publishing open-source training material on GitHub. The exposure included secrets, private keys, passwords, and internal Microsoft Teams messages. The team had shared the material via Azure Shared Access Signature (SAS) tokens that were misconfigured, exposing far more data than intended.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Tencent Holdings
Affected by incidents
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as Deployer
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Turing Robot
Affected by incidents
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as Developer
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
Affected by incidents
- Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
Incidents involved as Developer
- Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks
Amazon
Incidents involved as Developer and Deployer
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
- Incident 587 · 1 Report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
Incidents involved as Developer
OpenAI
Incidents involved as Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 995 · 2 Reports
The New York Times Sues OpenAI and Microsoft Over Alleged Unauthorized AI Training on Its Content
Affected by incidents
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Incidents involved as Developer
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Bing users
Affected by incidents
- Incident 468 · 5 Reports
ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics
- Incident 511 · 4 Reports
Microsoft's Bing Failed to Fetch Movie Showtimes Results Due to Date Confusion
Incidents involved as Deployer
Meta
Incidents involved as Developer and Deployer
Incidents involved as Developer
Incidents involved as Deployer
members of racial and ethnic minorities who risk being stereotyped or misrepresented
Affected by incidents
You.com
Incidents involved as Developer and Deployer
Incidents involved as Developer
xAI
Incidents involved as Developer and Deployer
Incidents involved as Developer
Perplexity
Incidents involved as Developer and Deployer
Incidents involved as Developer
Mistral
Incidents involved as Developer and Deployer
Incidents involved as Developer
Inflection
Incidents involved as Developer and Deployer
Incidents involved as Developer
Anthropic
Incidents involved as Developer and Deployer
Incidents involved as Developer
Microsoft Copilot
Incidents involved as Deployer
- Incident 770 · 3 Reports
Microsoft Copilot Falsely Accuses Journalist Martin Bernklau of Crimes
- Incident 838 · 1 Report
Microsoft Copilot Allegedly Provides Unsafe Medical Advice with High Risk of Severe Harm
Incidents implicated systems
Western Australia Department of Justice
Affected by incidents
Incidents involved as Deployer
Unidentified Storm-2139 actor from Illinois
Incidents involved as Developer and Deployer
Unidentified Storm-2139 actor from Florida
Incidents involved as Developer and Deployer
Microsoft Azure OpenAI Service
Incidents involved as Deployer
- Incident 956 · 1 Report
Alleged Inclusion of 12,000 Live API Keys in LLM Training Data Reportedly Poses Security Risks