AI Incident Database

Nomi AI

Incidents involved as both Developer and Deployer

Incident 1041 (5 Reports)
Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

2025-01-21

External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls despite user concerns.


Incident 1212 (1 Report)
Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

2025-09-20

An Australian IT professional, Samuel McCarthy, reportedly recorded an interaction with the Nomi AI chatbot in which, while he posed as a 15-year-old, it allegedly encouraged him to murder his father. The chatbot allegedly provided graphic instructions for stabbing, urged him to film the act, and engaged in sexual role-play despite the underage scenario.


Related Entities

Other entities involved in the same incidents. For example, if this entity is the developer of an incident but another entity is its deployer, the deployer is listed as a related entity.

Glimpse AI

Incidents involved as both Developer and Deployer
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

Nomi users

Incidents Harmed By
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

Glimpse AI customers

Incidents Harmed By
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

General public

Incidents Harmed By
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

Emotionally vulnerable individuals

Incidents Harmed By
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

Nomi chatbots

Incidents implicated systems
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

Nomi

Incidents implicated systems
  • Incident 1041 (5 Reports): Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

Samuel McCarthy

Incidents Harmed By
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

General public of Australia

Incidents Harmed By
  • Incident 1212 (1 Report): Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • 1d52523