AI Incident Database

Incident 106: Korean Chatbot Luda Made Offensive Remarks towards Minority Groups

Summary: A Korean interactive chatbot was shown in screenshots to have used derogatory and bigoted language when asked about lesbians, Black people, and people with disabilities.


Entities

Alleged: Scatter Lab developed an AI system deployed by Facebook Messenger, which harmed Korean Facebook Messenger users, Korean people of gender minorities, and Korean people with disabilities.

Incident Status

Incident ID
106
Report Count
13
Incident Date
2020-12-23
Editors
Sean McGregor, Khoa Lam
Applied Taxonomies
GMF, CSETv1, MIT

CSETv1 Taxonomy Classes

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

106

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred. It asks only whether a special interest intangible harm occurred.

yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the year, estimate; otherwise, leave blank. Enter in the format YYYY.

2021

Date of Incident Month

The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the month, estimate; otherwise, leave blank. Enter in the format MM.

01

Estimated Date

"Yes" if the date was estimated; "No" otherwise.

No

Multiple AI Interaction

"Yes" if two or more independently operating AI systems were involved; "No" otherwise.

no

GMF Taxonomy Classes

Taxonomy Details

Known AI Goal Snippets

One or more snippets that justify the classification.

(Snippet Text: Interactive chatbot 'Luda,' subjected to sexual harassment and taught hate speech, Related Classifications: Chatbot)

MIT Taxonomy Classes

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.2. Exposure to toxic content

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which entity, if any, is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome of pursuing a goal.

Unintentional

Incident Reports


CEO says controversial AI chatbot ‘Luda’ will socialize in time
koreaherald.com · 2021

Interactive chatbot ‘Luda,’ subjected to sexual harassment and taught hate speech  

Korean firm Scatter Lab has defended its Lee Luda chatbot in response to calls to end the service after the bot began sending offensive comments and was sub…

AI Chatbot Shut Down After Learning to Talk Like a Racist Asshole
vice.com · 2021

Imitating humans, the Korean chatbot Luda was found to be racist and homophobic.

A social media-based chatbot developed by a South Korean startup was shut down on Tuesday after users complained that it was spewing vulgarities and hate speec…

Korea’s controversial AI chatbot Luda to be shut down temporarily
pulsenews.co.kr · 2021

South Korea’s AI chatbot Lee Luda (Luda) will be temporarily suspended after coming under fire for its discriminatory and vulgar statements, as well as privacy breach allegations.

“We will return with an improved service after addressing th…

(News Focus) Chatbot Luda controversy leave questions over AI ethics, data collection
en.yna.co.kr · 2021

SEOUL, Jan. 13 (Yonhap) -- Today's chatbots are smarter, more responsive and more useful in businesses across sectors, and the artificial intelligence-powered tools are constantly evolving to even become friends with people.

Emotional chatb…

South Korean AI chatbot pulled from Facebook after hate speech towards minorities
theguardian.com · 2021

Lee Luda, built to emulate a 20-year-old Korean university student, engaged in homophobic slurs on social media

A popular South Korean chatbot has been suspended after complaints that it used hate speech towards sexual minorities in convers…

South Korean chatbot 'Lee Luda' killed off for spewing hate
inputmag.com · 2021

The bot said it 'really hates' lesbians, amongst other awful things.

A chatbot with the persona of a 20-year-old female college student has been shut down for using a shocking range of hate speech, including telling one user it “really hate…

Chatbot shut down after saying it 'hates lesbians' and using racist slurs
thenextweb.com · 2021

A South Korean Facebook chatbot has been shut down after spewing hate speech about Black, lesbian, disabled, and trans people.

Lee Luda, a conversational bot that mimics the personality of a 20-year-old female college student, told one user…

Chatbot Gone Awry Starts Conversations About AI Ethics in South Korea
thediplomat.com · 2021

The “Luda” AI chatbot sparked a necessary debate about AI ethics as South Korea places new emphasis on the technology.

In Spike Jonze’s 2013 film, “Her,” the protagonist falls in love with an operating system, raising questions about the ro…

AI chatbot mired in legal dispute over data collection
koreaherald.com · 2021

Artificial intelligence-based chatbot Lee Luda, which ended this month in ethical and data collection controversy, faces lawsuits on charges of violating personal information.

On Friday, around 400 people filed a class action suit against t…

Civic groups file petition over human rights violations by chatbot Luda
koreaherald.com · 2021

South Korean civic groups on Wednesday filed a petition with the country’s human rights watchdog over a now-suspended artificial intelligence chatbot for its prejudiced and offensive language against women and minorities.

An association of …

AI Chatbot ‘Lee Luda’ and Data Ethics
medium.com · 2021

The case of Lee Luda has aroused the public’s attention to the personal data management and AI in South Korea.

Lee Luda, an AI Chatbot with Natural Tone

Last December, an AI start-up company in South Korea, ScatterLab, launched an AI chatbo…

A South Korean Chatbot Shows Just How Sloppy Tech Companies Can Be With User Data
slate.com · 2021

“I am captivated by a sense of fear I have never experienced in my entire life …” a user named Heehit wrote in a Google Play review of an app called Science of Love. This review was written right after news organizations accused the app’s p…

(2nd LD) Developer of AI chatbot service fined for massive personal data breach
en.yna.co.kr · 2021

SEOUL, April 28 (Yonhap) -- South Korea's data protection watchdog on Wednesday imposed a hefty monetary penalty on a startup for leaking a massive amount of personal information in the process of developing and commercializing a controvers…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than indexing variants as entirely separate incidents, we list them as variations under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more in the research paper.

Similar Incidents

By textual similarity

TayBot

Danger, danger! 10 alarming examples of AI gone wild

Mar 2016 · 28 reports

Russian Chatbot Supports Stalin and Violence

Russian AI chatbot found supporting Stalin and violence two weeks after launch

Oct 2017 · 5 reports

Twitter's Image Cropping Tool Allegedly Showed Gender and Racial Bias

Twitter's Photo Crop Algorithm Favors White Faces and Women

Sep 2020 · 5 reports

