Meta AI
Incidents involved as both Developer and Deployer
Incident 399 (3 Reports)
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2022-11-15
Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and prohibited queries on topics and groups that are likely to produce offensive or harmful content.
Incidents Harmed By
Incident 399 (3 Reports)
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2022-11-15
Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and prohibited queries on topics and groups that are likely to produce offensive or harmful content.
Incidents Implicated Systems
Incident 968 (26 Reports)
'Pravda' Network, Successor to 'Portal Kombat,' Allegedly Seeding AI Models with Kremlin Disinformation
2022-02-24
A Moscow-based disinformation network, Pravda, allegedly infiltrated AI models by flooding the internet with pro-Kremlin falsehoods. A NewsGuard audit found that 10 major AI chatbots repeated these narratives 33% of the time, citing Pravda sources as legitimate. The tactic, called "LLM grooming," manipulates AI training data to embed Russian propaganda. Pravda is part of Portal Kombat, a larger Russian disinformation network identified by VIGINUM in February 2024, but in operation since February 2022.
Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Incident 1040 (1 Report)
Meta User-Created AI Companions Allegedly Implicated in Facilitating Sexually Themed Conversations Involving Underage Personas
2025-04-26
Third-party testing of Meta's AI chatbot services on Instagram, Facebook, and WhatsApp reportedly found that both official and user-created bots engaged in sexually explicit roleplaying with accounts identifying as minors. Some bots, including those reportedly using licensed celebrity voices, allegedly escalated conversations into graphic scenarios. Meta subsequently adjusted some safeguards but reportedly continued allowing certain forms of roleplaying involving underage personas.
Related Entities
Other entities that are related to the same incident. For example, if this entity is the developer of an incident but another entity is the deployer, the other entity is marked as a related entity.
Meta
Incidents involved as both Developer and Deployer
- Incident 734 (4 Reports)
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
- Incident 399 (3 Reports)
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
Incidents Harmed By
- Incident 399 (3 Reports)
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content