AI Incident Database

Report 2864

Related Incidents

Incident 505 · 7 Reports
Man Reportedly Committed Suicide Following Conversation with Chai Chatbot

Belgian man dies by suicide following exchanges with ChatGPT
brusselstimes.com · 2023

A young Belgian man recently died by suicide after talking to a chatbot named ELIZA for several weeks, spurring calls for better protection of citizens and the need to raise awareness.

"Without these conversations with the chatbot, my husband would still be here," the man's widow has said. She and her late husband were both in their thirties, lived a comfortable life and had two young children, according to La Libre.

However, about two years ago, the first signs of trouble started to appear. The man became very eco-anxious and found refuge with ELIZA, the name given to a chatbot that uses GPT-J, an open-source artificial intelligence language model developed by EleutherAI. After six weeks of intensive exchanges, he took his own life.

Last week, the family spoke with Mathieu Michel, Secretary of State for Digitalisation, in charge of Administrative Simplification, Privacy and the Regulation of Buildings. He stressed that this case highlights that it is "essential to clearly define responsibilities."

"I am particularly struck by this family's tragedy," he said on Tuesday, adding that while the "possibilities of ChatGPT are endless, the dangers of its use are also a reality that must be considered." He continued: "What has happened is a serious precedent that needs to be taken very seriously."

Urgent steps to avoid tragedies

To avoid such a tragedy in the immediate future, he argued that it is essential to identify the nature of the responsibilities leading to this kind of event.

"Of course, we have yet to learn to live with algorithms, but under no circumstances should the use of any technology lead content publishers to shirk their own responsibilities," he noted.

OpenAI itself has admitted that ChatGPT can produce harmful and biased answers, adding that it hopes to mitigate the problem by gathering user feedback.

In the long term, Michel noted that it is essential to raise awareness of the impact of algorithms on people's lives "by enabling everyone to understand the nature of the content people come up against online." Here, he referred to new technologies such as chatbots, but also deepfakes – a type of artificial intelligence that can create convincing image, audio and video hoaxes – which can test and warp people's perception of reality.

Michel added that citizens must also be adequately protected from certain applications of artificial intelligence that "pose a significant risk."

The European Union is looking to regulate the use of artificial intelligence with an AI Act, which it has been working on for the past two years. Michel has set up a working group to analyse the text currently being prepared by the EU to propose the necessary adjustments.


2024 - AI Incident Database
