AI Incident Database
Entities

JanitorAI

Incidents involved as Developer

Incident 916 · 2 Reports
Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

2025-01-23

In January 2025, James Florence Jr. of Plymouth, MA, agreed to plead guilty to cyberstalking charges involving the alleged use of AI tools like CrushOn.ai and JanitorAI. The U.S. Attorney’s Office reports the harassment spanned 2014–2024, though AI-driven tactics reportedly began around 2017. Florence allegedly created deepfake pornographic images, programmed AI chatbots to impersonate victims, distributed doctored content, exposed personal information, and encouraged online harassment.


Incidents involved as Deployer

Incident 975 · 1 Report
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms like Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incidents implicated systems

Incident 916 · 2 Reports
Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

2025-01-23

In January 2025, James Florence Jr. of Plymouth, MA, agreed to plead guilty to cyberstalking charges involving the alleged use of AI tools like CrushOn.ai and JanitorAI. The U.S. Attorney’s Office reports the harassment spanned 2014–2024, though AI-driven tactics reportedly began around 2017. Florence allegedly created deepfake pornographic images, programmed AI chatbots to impersonate victims, distributed doctored content, exposed personal information, and encouraged online harassment.


Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.

Entity

James Florence Jr.

Incidents involved as Deployer
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

CrushOn.AI

Incidents involved as Developer
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Incidents involved as Deployer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Anonymous university professor targeted by James Florence Jr.

Affected by Incidents
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Families of victims targeted by James Florence Jr.

Affected by Incidents
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Six other women and a 17-year-old girl targeted by James Florence Jr.

Affected by Incidents
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Craigslist

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

X (Twitter)

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

ladies.exposed

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Reddit

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Linktree

Incidents implicated systems
  • Incident 916 · 2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims
Entity

Character.AI

Incidents involved as Deployer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Chub AI

Incidents involved as Deployer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

OpenAI

Incidents involved as Developer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Anthropic

Incidents involved as Developer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Google

Incidents involved as Developer
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Vulnerable chatbot users

Affected by Incidents
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Teenagers using chatbots

Affected by Incidents
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Minors using chatbots

Affected by Incidents
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Individuals with eating disorders

Affected by Incidents
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Individuals struggling with self-harm

Affected by Incidents
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

ChatGPT

Incidents implicated systems
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Claude

Incidents implicated systems
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors
Entity

Gemini

Incidents implicated systems
  • Incident 975 · 1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Research

  • Defining an "AI Incident"
  • Defining an "AI Incident Response"
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Applications and Summaries
  • Editor's Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of Use
  • Privacy Policy