AI Incident Database

Report 2547

Associated Incidents

Incident 450 · 8 Reports
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI

Sama Cancelled OpenAI's ChatGPT Contract 8 Months Earlier
theinsaneapp.com · 2023

OpenAI, an artificial intelligence company, is accused of paying Kenyan workers less than $2 an hour to make its ChatGPT chatbot less toxic.

This report is based on an investigation by Time magazine, which claims that OpenAI's Kenyan workers were tasked with building a filtering system to make ChatGPT safer and more user-friendly for everyday use.

One of those workers recalled: “This job required me to consume disturbing information about horrendous subjects such as child s*xual abuse, brutality, murder, suicide, torture, self-harm, and incest.”

Part of the report reads, “The premise of this project was simple: Feed an AI with examples of violence, hate speech, and sexual abuse, and that tool could learn how to detect toxic substances in the wild.” The resulting detector would flag toxicity in ChatGPT's training data so it could be filtered out, and it could also be used to scrub toxic text from the training datasets of future AI models.
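The idea described above — learn from human-labeled examples, then filter new text — can be sketched with a toy bag-of-words classifier. This is purely illustrative and is not OpenAI's actual system; the example strings, labels, and `toxic_score` function are all invented for this sketch.

```python
import math
from collections import Counter

# Toy labeled snippets standing in for the human-annotated examples
# (1 = toxic, 0 = benign); these strings are invented for illustration.
examples = [
    ("i will hurt you", 1),
    ("you are worthless", 1),
    ("i hate you", 1),
    ("have a nice day", 0),
    ("thanks for the help", 0),
    ("what a lovely garden", 0),
]

# "Train": accumulate per-class word counts.
counts = {0: Counter(), 1: Counter()}
for text, label in examples:
    counts[label].update(text.split())

vocab = set(counts[0]) | set(counts[1])

def toxic_score(text):
    """Naive-Bayes-style log-likelihood ratio with add-one smoothing.

    A positive score means the text looks more like the toxic class.
    """
    score = 0.0
    for word in text.lower().split():
        p_tox = (counts[1][word] + 1) / (sum(counts[1].values()) + len(vocab))
        p_ok = (counts[0][word] + 1) / (sum(counts[0].values()) + len(vocab))
        score += math.log(p_tox / p_ok)
    return score

# Filter a candidate training corpus: drop anything the detector flags.
corpus = ["have a lovely day", "i will hurt you"]
clean = [t for t in corpus if toxic_score(t) <= 0]
# clean -> ["have a lovely day"]
```

A production detector would of course use a far larger labeled set and a learned model rather than word counts, but the pipeline shape — human labels in, filtered corpus out — is the same one the report describes.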

Beginning in November 2021, OpenAI sent thousands of text snippets to a Kenyan outsourcing firm to obtain these labels. Much of that text appeared to have been pulled directly from the web's darkest corners, and some of it graphically described situations such as child s*xual abuse, brutality, murder, suicide, torture, or self-harm.

OpenAI gave this contract to Sama, a training-data firm that specializes in annotating data for AI algorithms.

According to Time, workers earned anywhere from $1.32 to $2 an hour, depending on their seniority and performance. The gap between that pay and the value of the work makes the job seem exploitative, even though the labor is a major contributor to billion-dollar industries.

Time's report drew on graphic details provided by Sama employees, who shared some of the most horrific things they had seen while working on the OpenAI contract.

A portion of the report states, “The work’s traumatizing nature ultimately led Sama to cancel all its OpenAI work in February 2022, eight months earlier than originally planned.”


2024 - AI Incident Database
