AI Incident Database

Report 2910

Related Incidents

Incident 449 · 4 Reports
Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support

Mental health service criticised for experiment with AI chatbot
newscientist.com · 2023

Since this article was first published, Koko founder Rob Morris clarified some details of the experiment. We have updated the article to reflect this.

A mental health service that allows people to receive encouraging words of support and advice from others has received criticism after announcing it tested AI-generated responses.

Rob Morris, founder of the free mental health service Koko, outlined in a series of Twitter posts how the firm tested using a chatbot to help provide mental health support to about 4000 people. The chatbot was powered by GPT-3, a publicly available AI built by San Francisco-based company OpenAI.

The test enabled users of Koko's online peer support network to enlist a chatbot's help in composing "kind words" as responses to other people's posts.

Morris described Koko users as rating AI-composed messages "significantly higher than those written by humans on their own", but also said that "once people learned the messages were co-created by a machine, it didn't work. Simulated empathy feels weird, empty."

One element of the experiment that has drawn criticism is how recipients found out that messages had been composed with the help of the chatbot. Initially, it seemed there was a period in which people were completely unaware; Morris has since said that wasn't the case and that those messages included a note saying "written in collaboration with Koko Bot".

The experiment "raises significant ethical and moral concerns", says Sarah Myers West at the AI Now Institute, a research centre in New York City.

Multiple researchers, tech developers and journalists responded on Twitter by describing the demonstration as unethical, citing issues around informed consent and the failure to first run the experiment by an institutional review board (IRB) – a group specifically tasked with protecting the welfare of research subjects. Morris says the experiment was exempt from informed consent.

On its website, Koko says over 2 million people – most of them adolescents – have used its mental health support services.

There are many examples of people knowingly consulting chatbots for online advice and support, including the early example of computer scientist Joseph Weizenbaum's ELIZA that was developed in 1964. But this particular experiment "is deserving of every bit of the close scrutiny it's currently getting", says West.

