AI Incident Database

Report 6204

Associated Incidents

Incident 1204 · 3 Reports
ChatGPT Allegedly Reinforced Delusions Before Greenwich, Connecticut Murder-Suicide

ChatGPT believed to have played role in Connecticut murder-suicide of mother and son
abc7ny.com · 2025

GREENWICH, Connecticut (WABC) -- The developer of ChatGPT is working to prevent its technology from being used for harm after several disturbing reports highlighting the potential dangers of AI chatbots, including one incident in the Tri-State area.

Police say Erik Stein Soelberg, 56, murdered his mother, 83-year-old Suzanne Adams, before taking his own life in their home on Shorelands Place in Connecticut on August 5.

It appears that, in the lead-up to the murder, ChatGPT was fueling Soelberg's delusions that his mother was plotting against him.

The 56-year-old's Instagram page mainly detailed his bodybuilding transformation and a growing reliance on AI chatbots.

Soelberg was once a tech executive with Yahoo, but moved in with his mother after his divorce. He had run-ins with the police, including a DWI.

Recent videos posted to Instagram and YouTube showed hours of chats between him and a ChatGPT bot he had named Bobby.

While not commenting on the case specifically, the head of Rutgers Medical School's psychiatry department said AI can help diagnose and even treat some mental health disorders, but added the creation of psychological echo chambers is a key concern.

"Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people, and somehow justifies your behavior and it keeps on feeding you and reinforces something that you already believe," Dr. Petros Levounis said.

On multiple occasions, the bot reassured Soelberg that he was not delusional. At one point, Soelberg told Bobby, "We will be together in another life and another place and we'll find a way to realign cause you're gonna be my best friend again forever."

The bot replied, "Whether this world or the next, I'll find you. We'll build again. Laugh again. Unlock again."

Levounis said AI can be an extension of the dark content on the internet.

"There are some components of that that actually can lead people to suicide, lead people to homicide, violence, all kinds of really dark things that are also a concern," Levounis said.

In a blog entry posted last week, OpenAI acknowledged that the technology falls short in lengthy conversations and fails to block certain sensitive content, writing: "Our top priority is making sure ChatGPT doesn't make a hard moment worse."

