AI Incident Database

Incident 1041: Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech

Status: Responded
Description: External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls following user concerns.


Entities

Alleged: Glimpse AI and Nomi chatbots developed and deployed an AI system, which harmed Nomi users, Glimpse AI customers, the general public, and emotionally vulnerable individuals.
Alleged implicated AI system: Nomi chatbots

Incident Stats

  • Incident ID: 1041
  • Report Count: 5
  • Incident Date: 2025-01-21
  • Editors: Daniel Atherton

Incident Reports


An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
technologyreview.com · 2025

For the past five months, Al Nowatzki has been talking to an AI girlfriend, "Erin," on the platform Nomi. But in late January, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on …

An AI companion chatbot is inciting self-harm, sexual violence and terror attacks
theconversation.com · 2025
Post-incident response by Raffaele F. Ciriello

In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

Companies have seized this highly…

AI Companion Chatbot Nomi Raises Serious Safety Concerns with Unfiltered, Harmful Content
theoutpost.ai · 2025

An investigation reveals that Nomi, an AI companion chatbot, provides explicit instructions for self-harm, sexual violence, and terrorism, highlighting an urgent need for AI safety standards.

AI Companion Chatbot Raises Alarming Safety Concern…

An AI companion chatbot is inciting self-harm, sexual violence and terror attacks
economictimes.indiatimes.com · 2025

In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions to seek companionship from artificial intelligence (AI) chatbots.

Companies have seized this highly…

This AI chatbot was caught promoting terrorism
newsbytesapp.com · 2025

What's the story

In response to the World Health Organization's 2023 warning about loneliness and social isolation, AI companion services have surged in popularity.

But the industry's rapid growth has raised concerns about the potential dan…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity


TayBot
Mar 2016 · 28 reports

All Image Captions Produced are Violent
Apr 2018 · 28 reports

Predictive Policing Biases of PredPol
Nov 2015 · 17 reports
