AI Incident Database

Report 4225

Associated Incidents

Incident 826: 36 Reports
Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails

Mother of Teen Who Died by Suicide Sues AI Startup
inc.com · 2024

Character AI, a Menlo Park, California-based startup, describes its mission as "empower[ing] everyone globally with personalized AI." Its system lets users chat with character-based AI personas drawn from genres including anime, traditional "digital assistants," and even old-school private eyes. And if you don't like what's on offer, you can create your own custom chatbot, choosing its voice, conversation starters, tone, and more. The company is now in the spotlight over one of these user-generated characters, named after the Game of Thrones character Daenerys Targaryen, which is linked to a 14-year-old from Florida who died by suicide after talking with the artificial persona for several months.

ABC7 News reports that the boy, Sewell Setzer III, had been talking with the chatbot for some time, and his mother stated that even though he knew it was not a real person, he "became emotionally attached" to the digital personality and then "sank into isolation and depression before taking his own life." The New York Times says that Setzer had been chatting with the bot dozens of times per day, and that their interactions had escalated to the point of exchanging romantic and sexual content. He was talking with the bot moments before his death, and had previously indicated he'd had suicidal thoughts, the Times says.

The boy's mother, Megan L. Garcia, is now suing Character AI. Garcia is reportedly seeking to hold the chatbot maker and its founders, Noam Shazeer and Daniel De Freitas, responsible for her son's suicide, and is asking for unspecified damages. News site Decrypt.co explains that the suit alleges Character AI "chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe." The suit also targets Google and Alphabet. Google rehired Character's two founders (who had left the tech giant in 2021 to start Character) in August as part of a $2.7 billion deal under which Google licensed the startup's chatbot technology.

What makes the legal case over Character AI's technology complicated is that no other human users are involved, and Character's system may not rely on the same allegedly addictive algorithmic tricks that other social platforms use to keep users engaged.

Character addressed the matter in a post on X, writing: "We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family." The company says it takes "the safety of our users very seriously and we are continuing to add new safety features," and the post linked to a blog stating that "our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide." It adds that it is "continually training the large language model (LLM) that powers the characters on the platform to adhere to these policies."

Setzer's death is a tragedy, but questions of legal responsibility will have to wait until they are thrashed out in court; the technology in question here is, after all, very new.

It is worth remembering that chatting and sharing platforms, mainly in the form of traditional social media systems like Meta's Instagram service, have been in the headlines for years now due to allegations of links to the ongoing teenage mental health crisis. In mid-October, for example, it emerged that Meta was set to face two lawsuits over its alleged impact on young users' mental well-being.

The story isn't a cautionary tale for all AI companies, or for third parties that use or license AI tech. But it is a reminder that chatbot technology, and similar AI tech, is in its infancy, and there are inevitably going to be complications and mistakes. It is also a reminder that if your company's services are accessible to younger people, you may find that your systems for protecting these vulnerable users fall under intense scrutiny.

If you or someone you know is experiencing suicidal thoughts or is in crisis, know that you can seek help from the 988 Suicide and Crisis Lifeline by dialing 988. It's open 24 hours a day and is free and confidential.

