GREENWICH, Connecticut (WABC) -- The developer of ChatGPT is working to prevent its technology from being used for harm after several disturbing reports highlighting the potential dangers of AI chatbots, including one incident in the Tri-State area.
Police say Erik Stein Soelberg, 56, murdered his mother, 83-year-old Suzanne Adams, before taking his own life in their home on Shorelands Place in Connecticut on August 5.
It appears that leading up to the murder, ChatGPT was fueling Soelberg's delusions that his mother was plotting against him.
The 56-year-old's Instagram page mainly documented his bodybuilding transformation and, increasingly, his reliance on AI chatbots.
Soelberg was once a tech executive with Yahoo, but moved in with his mother after his divorce. He had run-ins with the police, including a DWI.
Recent videos posted to Instagram and YouTube showed the hours of chats between him and a ChatGPT bot he named Bobby.
While not commenting on the case specifically, the head of Rutgers Medical School's psychiatry department said AI can help diagnose and even treat some mental health disorders, but added the creation of psychological echo chambers is a key concern.
"Perhaps you are more self-defeating in some ways, or maybe you are more on the other side and taking advantage of people, and somehow justifies your behavior and it keeps on feeding you and reinforces something that you already believe," Dr. Petros Levounis said.
On multiple occasions, the bot reassured Soelberg that he was not delusional. At one point, Soelberg told Bobby, "We will be together in another life and another place and we'll find a way to realign cause you're gonna be my best friend again forever."
The bot replied, "Whether this world or the next, I'll find you. We'll build again. Laugh again. Unlock again."
Levounis said AI can be an extension of the dark content on the internet.
"There are some components of that that actually can lead people to suicide, lead people to homicide, violence, all kinds of really dark things that are also a concern," Levounis said.
In a blog entry posted last week, OpenAI acknowledged that the technology falls short in lengthy conversations and fails to block certain sensitive content, writing: "Our top priority is making sure ChatGPT doesn't make a hard moment worse."