Twitter Users
Incidents Harmed By
Incident 6 (28 reports)
TayBot
2016-03-24
Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016 and removed within 24 hours after it generated multiple racist, sexist, and anti-Semitic tweets.
Incident 543 (17 reports)
Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip
2023-05-22
An apparent deepfake image posted to Twitter by a fake Bloomberg news account depicted an explosion near the Pentagon, outside Washington, D.C.
Incident 499 (11 reports)
Parody AI Images of Donald Trump Being Arrested Reposted as Misinformation
2023-03-21
AI-generated photorealistic images depicting Donald Trump being detained by police, originally posted on Twitter as parody, were reshared across social media platforms as factual news without their intended context.
Incident 103 (5 reports)
Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias
2020-09-18
Researchers found that Twitter's photo cropping algorithm favored white faces and women's faces in photos containing multiple faces, prompting the company to discontinue its use on mobile platforms.
Related Entities
Incidents involved as both Developer and Deployer
- Incident 103 (5 reports)
Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias
- Incident 296 (3 reports)
Twitter Recommender System Amplified Right-Leaning Tweets
Incidents Harmed By
unknown
Incidents involved as both Developer and Deployer
- Incident 543 (17 reports)
Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip
- Incident 243 (2 reports)
Bots Allegedly Made up Roughly Half of Twitter Accounts in Discussions Surrounding COVID-19 Related Issues