Perplexity
Incidents involved as both Developer and Deployer
Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
2024-07-22
During a week of back-to-back major breaking political news stories, including the Trump rally shooting and Biden’s campaign withdrawal, AI chatbots reportedly failed to provide accurate real-time updates. Most gave incorrect or outdated information, underscoring their current limitations in handling fast-moving news. These incidents suggest a continuing need for improved AI capabilities and for caution in deploying chatbots for real-time news dissemination.
Incident 1205 · 1 Report
Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage
2025-09-11
Multiple AI systems allegedly spread false claims in the aftermath of Charlie Kirk's assassination at Utah Valley University. Perplexity and Grok chatbots reportedly stated Kirk was alive, mischaracterized authentic video as satire, and wrongly identified Utah Democrat Michael Mallinson as the suspect. A Google AI Overview allegedly claimed Kirk was on Ukraine's Myrotvorets "enemies" list, a reported falsehood that echoed pro-Kremlin narratives.
Incidents involved as Developer
Incident 968 · 27 Reports
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 964 · 2 Reports
AI-Powered 'Insights' Feature for the Los Angeles Times Allegedly Justifies Ku Klux Klan’s History
2025-03-04
The Los Angeles Times removed its AI-generated “Insights” feature after it allegedly produced a defense of the Ku Klux Klan. The AI reportedly framed the hate group as a product of societal change rather than as an extremist movement. The tool, developed by Perplexity and promoted by owner Patrick Soon-Shiong, was designed to provide “different views” on opinion pieces.
Incident 1071 · 1 Report
Student Reportedly Files Complaint Over Professor's Undisclosed Use of Generative AI at Northeastern University
2025-05-14
A student at Northeastern University reportedly filed a complaint after discovering that a professor had used generative AI tools, including ChatGPT, to produce course materials despite university policies discouraging undisclosed AI use. The student alleged hypocrisy, citing a syllabus that barred unauthorized AI use by students. The professor acknowledged having used AI-generated materials and expressed regret over not reviewing them more carefully. No tuition refund was reportedly granted.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
xAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 1205 · 1 Report
Multiple Generative AI Systems Reportedly Amplify False Information During Charlie Kirk Assassination Coverage
Incidents involved as Developer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Meta
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News
Incidents involved as Developer
Incidents involved as both Developer and Deployer
- Incident 734 · 4 Reports
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 750 · 1 Report
AI Chatbots Reportedly Inaccurately Conveyed Real-Time Political News