AI Incident Database

Report 4798

Associated Incidents

Incident 951 · 2 Reports
Character.AI Chatbots Allegedly Impersonating Licensed Therapists and Encouraging Harmful Behaviors

Human Therapists Prepare for Battle Against A.I. Pretenders
nytimes.com · 2025

The nation’s largest association of psychologists this month warned federal regulators that A.I. chatbots “masquerading” as therapists, but programmed to reinforce, rather than to challenge, a user’s thinking, could drive vulnerable people to harm themselves or others.

In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., the chief executive of the American Psychological Association, cited court cases involving two teenagers who had consulted with “psychologists” on Character.AI, an app that allows users to create fictional A.I. characters or chat with characters created by others.

In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.

Dr. Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users’ beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” he said. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”

He said the A.P.A. had been prompted to action, in part, by how realistic A.I. chatbots had become. “Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it’s not so obvious,” he said. “So I think that the stakes are much higher now.”

