Sora
Incidents in which this system is implicated
Incident 1000 · 2 Reports
Sora Video Generator Has Reportedly Been Creating Biased Human Representations Across Race, Gender, and Disability
2025-03-23
A WIRED investigation found that OpenAI’s video generation model, Sora, exhibits representational bias across race, gender, body type, and disability. In tests using 250 prompts, Sora was more likely to depict CEOs and professors as men, flight attendants and childcare workers as women, and showed limited or stereotypical portrayals of disabled individuals and people with larger bodies.
Incident 1273 · 1 Report
Purportedly AI-Generated Fake Videos of Louvre Heist Reportedly Circulated Widely Online
2025-10-26
After the October 19, 2025 jewel heist at the Louvre, purported deepfake videos reportedly circulated widely online, allegedly depicting the robbery despite being fabricated. The clips were reportedly posted on Facebook, Douyin, and RedNote, and were noted to feature morphing artifacts, disappearing objects, and partial Sora watermarks. AI Forensics and independent checks reportedly found the depicted scenes inconsistent with the real Apollo Gallery, indicating the footage was not authentic.
Incident 1425 · 1 Report
'Citizens Against Mamdani' Accounts Reportedly Posted AI-Generated Videos of Fictional New Yorkers to Simulate Political Opposition
2025-11-05
In New York, a network of linked "Citizens Against Mamdani" accounts reportedly posted AI-generated videos of fictional constituents criticizing then-mayor-elect Zohran Mamdani across Instagram, TikTok, and X. Some videos reportedly displayed a visible Sora watermark, and forensic reviewers reportedly assessed at least one clip as highly likely to be a deepfake. The campaign appeared designed to simulate grassroots opposition and reportedly drew substantial engagement.