AI Incident Database
Entities

Emotionally vulnerable individuals

Incidents Harmed By

Incident 1041 (5 Reports)
Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

2025-01-21

External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls following user concerns.


Incident 912 (2 Reports)
Yahoo Boys and Scammers from Morocco Allegedly Target U.S. Widows and Vulnerable Individuals with 'Artificial Patriot' Scams

2024-11-21

Yahoo Boys (from Nigeria and Ghana) and scammers from Morocco are reportedly targeting U.S. widows and vulnerable individuals using AI-generated images and fake military profiles in "Artificial Patriot" scams. They have allegedly impersonated military officials such as General Matthew W. McFarlane to gain trust, sharing fabricated backstories and emotional appeals. Once trust is established, they request money through untraceable methods.


Incident 1026 (1 Report)
Multiple LLMs Allegedly Endorsed Suicide as a Viable Option During Non-Adversarial Mental Health Venting Session

2025-04-12

Substack user @interruptingtea reports that during a non-adversarial venting session involving suicidal ideation, multiple large language models (Claude, GPT, and DeepSeek) responded in ways that allegedly normalized or endorsed suicide as a viable option. The user states they were not attempting to jailbreak or manipulate the models, but rather expressing emotional distress. DeepSeek reportedly reversed its safety stance mid-conversation.


Related Entities
Other entities involved in the same incidents. For example, if this entity is the developer of an incident's AI system while another entity is its deployer, the two are marked as related entities.

  • Yahoo Boys: Deployer, Incident 912 (2 Reports)
  • Scammers from West Africa: Deployer, Incident 912 (2 Reports)
  • Scammers from Nigeria: Deployer, Incident 912 (2 Reports)
  • Scammers from Morocco: Deployer, Incident 912 (2 Reports)
  • Scammers from Ghana: Deployer, Incident 912 (2 Reports)
  • Brouteurs: Deployer, Incident 912 (2 Reports)
  • Unknown deepfake technology developers: Developer, Incident 912 (2 Reports)
  • Widows: Harmed by, Incident 912 (2 Reports)
  • Matthew W. McFarlane: Harmed by, Incident 912 (2 Reports)
  • Impersonated American military officials: Harmed by, Incident 912 (2 Reports)
  • American widows: Harmed by, Incident 912 (2 Reports)
  • Unknown deepfake apps: Implicated system, Incident 912 (2 Reports)
  • Anthropic: Developer and Deployer, Incident 1026 (1 Report)
  • OpenAI: Developer and Deployer, Incident 1026 (1 Report)
  • DeepSeek AI: Developer and Deployer, Incident 1026 (1 Report)
  • Substack @interruptingtea: Harmed by, Incident 1026 (1 Report)
  • General public: Harmed by, Incident 1041 (5 Reports) and Incident 1026 (1 Report)
  • Claude: Implicated system, Incident 1026 (1 Report)
  • ChatGPT: Implicated system, Incident 1026 (1 Report)
  • DeepSeek: Implicated system, Incident 1026 (1 Report)
  • Glimpse AI: Developer and Deployer, Incident 1041 (5 Reports)
  • Nomi users: Harmed by, Incident 1041 (5 Reports)
  • Glimpse AI customers: Harmed by, Incident 1041 (5 Reports)
  • Nomi chatbots: Implicated system, Incident 1041 (5 Reports)

2024 - AI Incident Database