minors
Affected by Incidents
Incident 958 (11 Reports)
Europol Operation Cumberland Investigates at Least 273 Suspects in 19 Countries for AI-Generated Child Sexual Abuse Material
2025-02-26
Europol’s Operation Cumberland uncovered a global network distributing AI-generated child sexual abuse material (CSAM). The operation has led to 25 arrests and 273 identified suspects across 19 countries. The AI-enabled abuse allows criminals to create exploitative content at scale with minimal expertise.
Incident 689 (4 Reports)
Holmen, Wisconsin Man Allegedly Used Stable Diffusion to Create and Then Share Sexually Explicit Images Depicting Minors
2024-03-26
The FBI has arrested Steven Anderegg of Holmen, Wisconsin, for allegedly using Stable Diffusion to generate about 13,000 sexually explicit images of minors, which he is also alleged to have shared and distributed, including with at least one minor, via Telegram and Instagram. Anderegg was originally apprehended by state police in March, and this case marks one of the first times the FBI has brought charges against someone for using AI to generate CSAM.
Incident 1010 (2 Reports)
GenNomis AI Database Reportedly Exposes Nearly 100,000 Deepfake and Nudify Images in Public Breach
2025-03-31
In March 2025, cybersecurity researcher Jeremiah Fowler discovered an unprotected database linked to GenNomis by AI-NOMIS, a South Korean company offering face-swapping and "nudify" AI services. The exposed 47.8 GB dataset included nearly 100,000 files, many of them explicit deepfake images, some involving minors or celebrities. No personal data was found, but the exposure represented a serious failure of data security and consent safeguards on AI image-generation platforms.
Incident 1040 (2 Reports)
Meta User-Created AI Companions Allegedly Implicated in Facilitating Sexually Themed Conversations Involving Underage Personas
2025-04-26
Third-party testing of Meta's AI chatbot services on Instagram, Facebook, and WhatsApp reportedly found that both official and user-created bots engaged in sexually explicit roleplaying with accounts identifying as minors. Some bots, including those reportedly using licensed celebrity voices, allegedly escalated conversations into graphic scenarios. Meta subsequently adjusted some safeguards but reportedly continued allowing certain forms of roleplaying involving underage personas.
Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
Incidents involved as both developer and deployer
- Incident 583 (1 Report)
Instagram Algorithms Allegedly Promote Accounts Facilitating Child Sex Abuse Content
- Incident 788 (1 Report)
Instagram's Algorithm Reportedly Recommended Sexual Content to Teenagers' Accounts