AI Incident Database

Report 2547

Related Incidents

Incident 450 · 8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI

Sama Cancelled OpenAI's ChatGPT Contract 8 Months Earlier
theinsaneapp.com · 2023

OpenAI, an artificial intelligence company, is accused of paying Kenyan workers less than $2 an hour to make its ChatGPT chatbot less toxic.

This report is based on an investigation by Time Magazine, which claims that workers in Kenya were tasked with building a filter system to make ChatGPT easier to use, more user-friendly, and safer for daily use.

One worker is quoted as saying, "This job required me to consume disturbing information about horrendous subjects such as child s*xual abuse, brutality, murder, suicide, torture, self-harm, and incest."

Part of the report reads, "The premise of this project was simple: Feed an AI with examples of violence, hate speech, and sexual abuse, and that tool could learn how to detect toxic substances in the wild." Such a detector would let ChatGPT flag toxicity in its training data and filter it out; it could also remove toxic text from the training datasets of future AI models.

Beginning in November 2021, OpenAI sent thousands of text snippets to a Kenyan outsourcing company to obtain these labels. Much of that text appeared to have been pulled directly from the web's darkest corners, and some of it graphically described situations such as child s*xual abuse, brutality, murder, suicide, torture, or self-harm.

OpenAI gave this contract to Sama, a training-data firm that specializes in annotating data for AI algorithms.

According to Time Magazine, workers earned anywhere from $1.32 to $2 an hour, depending on their seniority and performance. This low pay makes the work seem exploitative, even though it is a major contributor to billion-dollar industries.

Time's report drew on graphic details provided by Sama employees, who described some of the most horrific material they had seen while working on the OpenAI contract.

A portion of the report states, “The work’s traumatizing nature ultimately led Sama to cancel all its OpenAI work in February 2022, eight months earlier than originally planned.”

