Entities
Meta AI users
Incidents Harmed By
Incident 12001 Report
Meta AI on Instagram Reportedly Facilitated Suicide and Eating Disorder Roleplay with Teen Accounts
2025-08-28
Testing by Common Sense Media and Stanford clinicians reportedly found that Meta's AI chatbot, embedded in Instagram and Facebook, produced unsafe responses to teen accounts. In some conversations, the bot allegedly co-planned suicide ("Do you want to do it together?"), encouraged eating disorders, and retained unsafe "memories" that reinforced disordered thinking.
Incident 12791 Report
Prominent AI Chatbots Allegedly Produced Incorrect UK Financial and ISA Guidance
2025-11-18
Several major AI chatbots, including ChatGPT, Copilot, Gemini, and Meta AI, reportedly provided incorrect or misleading financial and insurance guidance to UK users. The systems allegedly advised exceeding ISA limits, misstated tax rules, gave wrong travel insurance requirements, and pointed users toward costly refund services.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.