General public
Incidents Harmed By
Incident 616 · 43 Reports
Sports Illustrated Is Alleged to Have Used AI to Invent Fake Authors and Their Articles
2023-11-27
Sports Illustrated, managed by The Arena Group, allegedly used AI-generated authors and content, compromising journalistic integrity. Profiles of these fictitious authors, complete with AI-generated headshots, appeared alongside articles, misleading readers. The issue was exposed when inconsistencies in author identities and writing quality were noticed, leading to the removal of this content from the publication's website.
Incident 701 · 36 Reports
American Asylum Seeker John Mark Dougan in Russia Reportedly Spreads Disinformation via AI Tools and Fake News Network
2024-05-29
John Mark Dougan, a former Florida sheriff's deputy granted asylum in Russia, has been implicated in spreading disinformation. Utilizing AI tools like OpenAI's ChatGPT and DALL-E 3, Dougan created over 160 fake news sites, disseminating false narratives to millions worldwide. His actions align with Russian disinformation strategies targeting Western democracies. See also Incident 734.
Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot produced many reportedly biased outputs after release, ranging from racial inaccuracies to political biases, including regarding Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting Google to temporarily halt the feature and issue an apology.
Incident 632 · 31 Reports
Significant Increase in Deepfake Nudes of Taylor Swift Circulating on Social Media
2024-01-24
AI-generated sexually explicit images of Taylor Swift circulated on X, garnering over 45 million views before removal. Originating from a Telegram group, the deepfakes violated X's policies against synthetic media and nonconsensual nudity, underscoring the difficulty of moderating such content.
Related Entities
Other entities that are related to the same incident. For example, if an incident's developer is this entity and its deployer is another entity, they are marked as related entities.
unknown
Incidents involved as both Developer and Deployer
- Incident 606 · 15 Reports
Deepfaked Advertisements Using the Likenesses of Celebrities Such as Tom Hanks and Gayle King Without Their Consent
- Incident 676 · 2 Reports
Deepfake Audio Falsely Depicts Philippines President Ferdinand Marcos Jr. Ordering Military Action
Incidents involved as Developer
TikTok
Incidents involved as both Developer and Deployer
Incidents Harmed By
Incidents involved as Developer
Incidents involved as Deployer
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 686 · 2 Reports
Meta AI Image Generator Reportedly Fails to Accurately Represent Interracial Relationships
Incidents involved as Developer
- Incident 968 · 24 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
- Incident 578 · 1 Report
Alleged Exploitation of Meta's Open-Source LLaMA Model for NSFW and Violent Content