Microsoft
Incidents involved as both Developer and Deployer
Incident 6 · 28 Reports
TayBot
2016-03-24
Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.
Incident 612 · 17 Reports
Microsoft AI Poll Allegedly Causes Reputational Harm of The Guardian Newspaper
2023-10-31
An AI-generated poll by Microsoft, displayed alongside a Guardian article, inappropriately speculated on the cause of Lilie James's death, leading to public backlash and alleged reputational damage for The Guardian. Microsoft acknowledged the issue, subsequently deactivating such polls and revising its AI content policies.
Incident 127 · 12 Reports
Microsoft’s Algorithm Allegedly Selected Photo of the Wrong Mixed-Race Person Featured in a News Story
2020-06-06
A news story published on MSN.com featured a photo of the wrong mixed-race person, allegedly selected by an algorithm, after Microsoft laid off journalists and editorial workers at its news organizations and replaced them with AI systems.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.
Incidents Harmed By
Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
2017-08-02
Chatbots on a Chinese messaging service expressed anti-China sentiments, prompting the service to remove and reprogram them.
Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
2023-02-14
Users, including the person who revealed its built-in initial prompts, reported that the Bing AI-powered search tool made death threats or declared them to be threats, sometimes as an unintended persona.
Incident 477 · 6 Reports
Bing Chat Tentatively Hallucinated in Extended Conversations with Users
2023-02-14
Early testers reported that, in extended conversations, Bing Chat tended to make up facts and emulate emotions through an unintended persona.
Incident 470 · 2 Reports
Bing Chat Response Cited ChatGPT Disinformation Example
2023-02-08
Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.
Incidents involved as Developer
Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
2017-08-02
Chatbots on a Chinese messaging service expressed anti-China sentiments, prompting the service to remove and reprogram them.
Incident 188 · 4 Reports
Argentinian City Government Deployed Teenage-Pregnancy Predictive Algorithm Using Invasive Demographic Data
2018-04-11
In 2018, during the abortion-decriminalization debate in Argentina, the Salta city government deployed a teenage-pregnancy predictive algorithm built by Microsoft that allegedly lacked a defined purpose and explicitly used sensitive information, such as whether a subject had a disability or whether her home had access to hot water.
Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
2006-02-25
Automated content moderation tools intended to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.
Incident 714 · 2 Reports
Microsoft-Powered New York City Chatbot Advises Illegal Practices
2024-03-29
New York City's chatbot, launched under Mayor Eric Adams's plan to assist businesses, has reportedly been providing dangerously inaccurate legal advice. The Microsoft-powered bot allegedly informed users that landlords can refuse Section 8 vouchers and that businesses can operate cash-free, among other falsehoods. The city acknowledges that the chatbot is a pilot program and has committed to improvements while the errors are addressed.
Incidents involved as Deployer
Incident 571 · 1 Report
Accidental Exposure of 38TB of Data by Microsoft's AI Research Team
2023-06-22
Microsoft's AI research team accidentally exposed 38TB of sensitive data while publishing open-source training material on GitHub. The exposure included secrets, private keys, passwords, and internal Microsoft Teams messages. The team utilized Azure's Shared Access Signature (SAS) tokens for sharing, which were misconfigured, leading to the wide exposure of data.
Related Entities
Tencent Holdings
Incidents Harmed By
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as Deployer
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Turing Robot
Incidents Harmed By
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as Developer
- Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party
Incidents involved as both Developer and Deployer
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
Incidents involved as Developer
- Incident 587 · 1 Report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
Amazon
Incidents involved as both Developer and Deployer
- Incident 102 · 2 Reports
Personal voice assistants struggle with black voices, new study shows
Incidents involved as Developer
- Incident 587 · 1 Report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
OpenAI
Incidents Harmed By
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Incidents involved as Developer
- Incident 503 · 7 Reports
Bing AI Search Tool Reportedly Declared Threats against Users
Bing users
Incidents Harmed By
- Incident 468 · 5 Reports
ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics
- Incident 511 · 4 Reports
Microsoft's Bing Failed to Fetch Movie Showtimes Results Due to Date Confusion