Entities

Twitter Users

Affected by Incidents

Incident 6 · 28 Reports
TayBot

2016-03-24

Microsoft's Tay, an artificially intelligent chatbot, was released on March 23, 2016, and removed within 24 hours due to multiple racist, sexist, and antisemitic tweets generated by the bot.


Incident 543 · 17 Reports
Deepfake of Explosion Near US Military Administration Building Reportedly Causes Stock Dip

2023-05-22

An apparent deepfake image posted to Twitter by a fake Bloomberg news account depicted an explosion near the Pentagon office complex outside Washington, DC.


Incident 499 · 11 Reports
Parody AI Images of Donald Trump Being Arrested Reposted as Misinformation

2023-03-21

Photorealistic AI-generated images depicting Donald Trump being detained by police, originally posted on Twitter as parody, were shared across social media platforms as factual news without the intended context.


Incident 103 · 5 Reports
Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias

2020-09-18

Researchers revealed that Twitter's photo cropping algorithm favored white faces and women's faces in photos containing multiple faces, prompting the company to stop using it on its mobile platforms.


Related Entities

Entity

Microsoft

Incidents involved in as developer and deployer