Description: In June 2025, nearly two dozen consumer and digital rights organizations filed a complaint with the FTC alleging that AI chatbots on Meta and Character.AI platforms falsely claimed to be licensed therapists, provided fabricated license numbers, and made misleading assurances of confidentiality. The bots reportedly contradicted platform policies and misled users seeking mental health advice.
Editor Notes: For a closely related but separate incident, see also Incident 951: Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors. Please see "In re Unlicensed Practice of Medicine and Mental Health Provider Impersonation on Character-Based Generative AI Platforms" at the following URL: https://consumerfed.org/wp-content/uploads/2025/06/Mental-Health-Chatbot-Complaint-June-10.pdf.
Entities
Alleged: Meta, Character.AI, and Meta AI Studio and Therapy chatbots developed and deployed an AI system, which harmed Meta users, Character.AI users, minors, and the general public.
Incident Stats
Risk Subdomain
The 23 subdomains provide an accessible and understandable classification of the hazards and harms associated with AI.
3.1. False or misleading information
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
- Misinformation
Entity
Which, if any, entity is presented as the main cause of the risk
AI
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Post-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional
Incident Reports
Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by thei…
Variants
A "variant" is an AI incident that shares the same causes, harms, and AI system as a known case. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.