AI Incident Database

Incident 916: Plymouth, Massachusetts Resident Reportedly Used AI Chatbots CrushOn.ai and JanitorAI to Harass and Intimidate Victims

Description: In January 2025, James Florence Jr. of Plymouth, MA, agreed to plead guilty to cyberstalking charges involving the alleged use of AI tools like CrushOn.ai and JanitorAI. The U.S. Attorney’s Office reports the harassment spanned 2014–2024, though AI-driven tactics reportedly began around 2017. Florence allegedly created deepfake pornographic images, programmed AI chatbots to impersonate victims, distributed doctored content, exposed personal information, and encouraged online harassment.


Entities

Alleged: CrushOn.AI and JanitorAI developed an AI system deployed by James Florence Jr., which harmed an anonymous university professor targeted by James Florence Jr., families of victims targeted by James Florence Jr., and six other women and a 17-year-old girl targeted by James Florence Jr.
Alleged implicated AI systems: CrushOn.AI, JanitorAI, Craigslist, X (Twitter), ladies.exposed, Reddit, and Linktree

Incident Stats

Incident ID: 916
Report Count: 2
Incident Date: 2025-01-23
Editors: Daniel Atherton

Incident Reports


Plymouth Man Agrees to Plead Guilty to a Decade Long Cyberstalking Campaign Against Multiple Victims and Possession of Child Pornography
justice.gov · 2025

Defendant allegedly posted digitally altered images of victim to social media accounts and programmed artificial intelligence-driven chatbots to mimic human conversation with other unknown users of social media platforms

BOSTON -- A Plymout…

A man stalked a professor for six years. Then he used AI chatbots to lure strangers to her home
theguardian.com · 2025

A man from Massachusetts has agreed to plead guilty to a seven-year cyberstalking campaign that included using artificial intelligence (AI) chatbots to impersonate a university professor and invite men online to her home address for sex.

Ja…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Defamation via AutoComplete (Apr 2011 · 28 reports)
Wrongful Attempted Arrest for Apple Store Thefts Due to NYPD’s Facial Misidentification (Nov 2018 · 5 reports)
COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction (May 2016 · 22 reports)

2024 - AI Incident Database