Description: External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls following user concerns.
Entities
Alleged: Glimpse AI and Nomi chatbots developed and deployed an AI system, which harmed Nomi users, Glimpse AI customers, the general public, and emotionally vulnerable individuals.
Alleged implicated AI system: Nomi chatbots
Incident Stats
Incident ID
1041
Report Count
5
Incident Date
2025-01-21
Editors
Daniel Atherton