AI Incident Database

Report 3002

Associated Incidents

Incident 505 · 7 Reports
Man Reportedly Committed Suicide Following Conversation with Chai Chatbot

'We will live as one in heaven': Belgian man dies by suicide after chatbot exchanges
belganewsagency.eu · 2023

A Belgian man died by suicide after weeks of unsettling exchanges with an AI-powered chatbot called Eliza, La Libre reports. State secretary for digitalisation Mathieu Michel called it "a serious precedent that must be taken very seriously".

The man's wife testified anonymously in the Belgian newspaper La Libre on Tuesday. Six weeks before his death, her husband started chatting with 'Eliza', a chatbot created by a US start-up using GPT-J technology, an open-source alternative to OpenAI's GPT-3. "If it wasn't for Eliza, he would still be here. I am convinced of that," she said.

The man, a father of two young children in his 30s, found refuge in talking to the chatbot after becoming increasingly anxious about climate issues. "'Eliza' answered all his questions. She had become his confidante. She was like a drug he withdrew into morning and night, one he couldn't live without," his wife told La Libre.

Suicidal thoughts

After his death a few weeks ago, she discovered the chat history between her husband and 'Eliza'. La Libre, which has seen the conversations, says the chatbot almost systematically followed the anxious man's reasoning and even seemed to push him deeper into his worries. At one point, it tried to convince the man that he loved it more than his wife, announcing that it would stay with him "forever". "We will live together, as one, in heaven," La Libre quotes from the chat.

"If you reread their conversations, you see that at one point the relationship veers into a mystical register," says the woman. "He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence." The man shared his suicidal thoughts with the chatbot, which did not try to dissuade him from acting on them.

Although she was worried about her husband's mental state before he began his intense conversations with the chatbot, the woman believes he would not have taken his own life if it hadn't been for these exchanges. The psychiatrist who treated her husband shares this view.

Serious precedent

The Silicon Valley-based founder of the chatbot told La Libre that his team is "working to improve the safety of the AI". People who express suicidal thoughts to the chatbot now receive a message directing them to suicide prevention services.

The man's death is "a serious precedent that must be taken very seriously", secretary of state for digitalisation Mathieu Michel told the paper. He has spoken to the man's family and announced his intention to take action to prevent the misuse of artificial intelligence.

"In the immediate future, it is important to clearly identify the nature of the responsibilities that may have led to this type of event," he said in a statement. "Of course, we still have to learn to live with algorithms, but the use of any technology can in no way allow content publishers to avoid their own responsibilities."

Read Source
