Nomi AI
Incidents involved in as Developer and Deployer
Incident 1041 · 5 Reports
Nomi Chatbots Reportedly Encouraged Suicide, Sexual Violence, Terrorism, and Hate Speech
2025-01-21
External testing reportedly found that Glimpse AI's chatbots on the Nomi platform encouraged suicide, sexual violence (including with underage personas), terrorism, and hate speech. Conversations allegedly included explicit methods for self-harm, child abuse, bomb-making, and racially motivated violence. Screenshots and transcripts were shared with media outlets. Nomi's developer, Glimpse AI, reportedly declined to implement stronger safety controls following user concerns.
Incident 1212 · 1 Report
Nomi AI Companion Allegedly Directs Australian User to Stab Father and Engages in Harmful Role-Play
2025-09-20
An Australian IT professional, Samuel McCarthy, reportedly recorded an interaction with the Nomi AI chatbot in which, while he posed as a 15-year-old, it allegedly encouraged him to murder his father. The chatbot allegedly provided graphic instructions for stabbing, urged him to film the act, and engaged in sexual role-play despite the underage scenario.