General public
Incidents Harmed By
Incident 616 · 43 Reports
Sports Illustrated Is Alleged to Have Used AI to Invent Fake Authors and Their Articles
2023-11-27
Sports Illustrated, operated by The Arena Group, allegedly published content attributed to fabricated, AI-generated authors, compromising journalistic integrity. Profiles of these fictitious authors, complete with AI-generated headshots, appeared alongside articles, misleading readers. The issue was exposed when inconsistencies in the authors' identities and writing quality were noticed, leading to the removal of the content from the publication's website.
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot faced numerous reported bias issues upon release, producing a variety of problematic outputs such as racial inaccuracies and political biases, including on Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting Google to temporarily pause the feature and issue an apology.
Incident 632 · 31 Reports
Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media
2024-01-24
AI-generated sexually explicit images of Taylor Swift circulated on X, garnering over 45 million views before removal. Originating in a Telegram group, the deepfakes challenged content moderation efforts and violated X's policies against synthetic media and nonconsensual nudity.
Incident 626 · 30 Reports
Social Media Scammers Used Deepfakes of Taylor Swift and Several Other Celebrities in Fraudulent Le Creuset Cookware Giveaways
2023-12-26
Scammers reportedly created deepfakes of Taylor Swift, Selena Gomez, Joanna Gaines, Lainey Wilson, Ree Drummond, Oprah, Jennifer Lopez, Trisha Yearwood, Martha Stewart, and Blake Shelton promoting a Le Creuset giveaway. These AI-generated ads, which appeared on Meta platforms and TikTok, falsely claimed users could receive free cookware by paying a small shipping fee; victims were instead unknowingly enrolled in a costly monthly subscription.
Related Entities
unknown
Incidents involved as both Developer and Deployer
- Incident 606 · 15 Reports
Deepfaked Advertisements Using the Likenesses of Celebrities Such as Tom Hanks and Gayle King Without Their Consent
- Incident 676 · 2 Reports
Deepfake Audio Falsely Depicts Philippines President Ferdinand Marcos Jr. Ordering Military Action
Incidents involved as Developer
TikTok
Incidents involved as both Developer and Deployer
Incidents Harmed By
Incidents involved as Developer
Incidents involved as Deployer
Meta
Incidents involved as both Developer and Deployer
- Incident 686 · 2 Reports
Meta AI Image Generator Reportedly Fails to Accurately Represent Interracial Relationships
- Incident 583 · 1 Report
Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content
Incidents involved as Developer
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 583 · 1 Report
Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content
- Incident 788 · 1 Report
Instagram's Algorithm Reportedly Recommended Sexual Content to Teenagers' Accounts
Incidents involved as Deployer
Donald Trump
Incidents Harmed By
- Incident 621 · 3 Reports
Microsoft AI Is Alleged to Have Generated Violent Imagery of Minorities and Public Figures
- Incident 664 · 1 Report
Deepfake Generated by the Lincoln Project of Trump's Father Used in Political Attack Ad
Incidents involved as Deployer
Incidents involved as both Developer and Deployer
- Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
- Incident 693 · 7 Reports
Google AI Reportedly Delivering Confidently Incorrect and Harmful Information
Incidents involved as Developer
ChatGPT
Incidents involved as Developer
- Incident 609 · 2 Reports
Flawed AI in Google Search Reportedly Misinforms about Geography
- Incident 680 · 2 Reports
Russia-Linked AI CopyCop Site Identified as Modifying and Producing at Least 19,000 Deceptive Reports
Incidents involved as Deployer
Unknown deepfake creators
Incidents involved as both Developer and Deployer
Incidents involved as Developer
Incidents involved as Deployer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 718 · 1 Report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
- Incident 732 · 1 Report
Whisper Speech-to-Text AI Reportedly Found to Create Violent Hallucinations
Incidents Harmed By
Incidents involved as Developer
Organizations integrating Whisper into customer service systems
Incidents involved as Deployer
xAI
Incidents involved as both Developer and Deployer
- Incident 734 · 1 Report
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 763 · 1 Report
Grok AI Chatbot Reportedly Spreads Unfounded Rumors About Trump’s Dentures