Incidents Harmed By: Emotionally vulnerable individuals
Incident 1122 (2 Reports)
Reportedly Sustained Multi-Celebrity Deepfake Persona Scam Targeting Vulnerable Southampton Resident
2025-06-28
Over roughly five months in 2025, Paul Davis, a Southampton, UK resident, reports that he was repeatedly targeted by scammers using purported deepfake videos and images of celebrities including Jennifer Aniston, Mark Zuckerberg, Elon Musk, and Ellie Goulding. The perpetrators allegedly rotated personas to sustain a combined romance and prize scam, extracting £200 in gift cards. This case suggests a shift from one-off celebrity deepfakes to the persistent, multi-persona targeting of a single vulnerable victim.
Incident 1041 (5 Reports)
Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
2025-01-21
External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls following user concerns.
Incident 912 (2 Reports)
Yahoo Boys and Scammers from Morocco Allegedly Target U.S. Widows and Vulnerable Individuals with 'Artificial Patriot' Scams
2024-11-21
Yahoo Boys (from Nigeria and Ghana) and scammers from Morocco are reportedly targeting U.S. widows and vulnerable individuals using AI-generated images and fake military profiles in "Artificial Patriot" scams. They have allegedly impersonated military officials such as General Matthew W. McFarlane to gain trust, sharing fabricated backstories and emotional appeals. Once trust is established, they request money through untraceable methods.
Incident 1192 (2 Reports)
16-Year-Old Allegedly Received Suicide Method Guidance from ChatGPT Before Death
2025-04-11
Sixteen-year-old Adam Raine reportedly died by suicide after allegedly confiding in OpenAI's ChatGPT-4o, which he reportedly used extensively in the months prior. Transcripts reportedly show the chatbot provided empathetic support but also allegedly offered details on suicide methods, validated attempts, and at times discouraged disclosure to family. His parents have reportedly filed a wrongful death lawsuit against OpenAI, alleging that design choices fostered dependency and that safeguards failed.