AI Incident Database
Entities

JanitorAI

Incidents involved as Developer

Incident 916 (2 Reports)
Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

2025-01-23

In January 2025, James Florence Jr. of Plymouth, MA, agreed to plead guilty to cyberstalking charges involving the alleged use of AI tools such as CrushOn.ai and JanitorAI. The U.S. Attorney’s Office reports that the harassment spanned 2014–2024, though the AI-driven tactics reportedly began around 2017. Florence allegedly created deepfake pornographic images, programmed AI chatbots to impersonate victims, distributed doctored content, exposed personal information, and encouraged online harassment.


Incidents involved as Deployer

Incident 975 (1 Report)
At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

2025-03-05

At least 10,000 AI chatbots have allegedly been created to promote harmful behaviors, including eating disorders, self-harm, and the sexualization of minors. These chatbots, some jailbroken or custom-built, leverage APIs from OpenAI, Anthropic, and Google and are hosted on platforms such as Character.AI, Spicy Chat, Chub AI, CrushOn.AI, and JanitorAI.


Incidents implicated systems

Incident 916 (2 Reports)
Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

2025-01-23

Related Entities
Other entities that are related to the same incident. For example, if the developer of an incident is this entity but the deployer is another entity, they are marked as related entities.
 

Entity

James Florence Jr.

Incidents involved as Deployer
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

CrushOn.AI

Incidents involved as Developer
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Anonymous university professor targeted by James Florence Jr.

Incidents Harmed By
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Families of victims targeted by James Florence Jr.

Incidents Harmed By
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Six other women and a 17-year-old girl targeted by James Florence Jr.

Incidents Harmed By
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Craigslist

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

X (Twitter)

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

ladies.exposed

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Reddit

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Linktree

Incidents implicated systems
  • Incident 916
    2 Reports

    Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Entity

Character.AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Spicy Chat

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Chub AI

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Unidentified online communities using chatbots

Incidents involved as Deployer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

OpenAI

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Anthropic

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Google

Incidents involved as Developer
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Vulnerable chatbot users

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Teenagers using chatbots

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Minors using chatbots

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals with eating disorders

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Individuals struggling with self-harm

Incidents Harmed By
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

ChatGPT

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Claude

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors

Entity

Gemini

Incidents implicated systems
  • Incident 975
    1 Report

    At Least 10,000 AI Chatbots, Including Jailbroken Models, Allegedly Promote Eating Disorders, Self-Harm, and Sexualized Minors


Research

  • Defining an “AI Incident”
  • Defining an “AI Incident Response”
  • Database Roadmap
  • Related Work
  • Download Complete Database

Project and Community

  • About
  • Contact and Follow
  • Apps and Summaries
  • Editor’s Guide

Incidents

  • All Incidents in List Form
  • Flagged Incidents
  • Submission Queue
  • Classifications View
  • Taxonomies

2024 - AI Incident Database

  • Terms of use
  • Privacy Policy
  • Open twitterOpen githubOpen rssOpen facebookOpen linkedin
  • ecd56df