AI Incident Database
Incident 43: Racist AI behaviour is not a new problem

Description:
From 1982 to 1986, St George's Hospital Medical School used a program to automate part of its admissions process. The program discriminated against women and members of ethnic minorities.
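The failure mode is worth making concrete: Franglen's screening program was written to reproduce the decisions of the school's human assessors, so the biases in those historical decisions were carried over into explicit scoring rules that penalized female applicants and applicants whose names were classified as non-European. The sketch below is a minimal illustration of that mechanism only; the field names, weights, and scoring logic are hypothetical and are not taken from the original program.

```python
# Illustrative sketch (hypothetical fields and weights): a screening rule
# calibrated to agree with past human decisions ends up encoding the
# biases present in those decisions.

def screening_score(applicant: dict) -> float:
    """Score an application the way the historical assessors did."""
    score = 0.0
    score += 2.0 * applicant["exam_grades"]       # legitimate signal
    score += 1.0 * applicant["interview_rating"]  # legitimate signal

    # These two rules are where the discrimination lives. They exist
    # because they make the program agree *more* closely with the biased
    # historical decisions it was tuned to replicate.
    if applicant["sex"] == "female":
        score -= 1.5
    if not applicant["name_classified_european"]:  # inferred from name/birthplace
        score -= 3.0
    return score

# Two applicants with identical merit receive different outcomes:
alice = {"exam_grades": 9, "interview_rating": 8, "sex": "female",
         "name_classified_european": False}
bob = {"exam_grades": 9, "interview_rating": 8, "sex": "male",
       "name_classified_european": True}
assert screening_score(alice) < screening_score(bob)
```

The point of the sketch is that a system tuned purely for agreement with past human decisions cannot distinguish legitimate signal from historical prejudice: replicating the decisions faithfully means replicating the discrimination too.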


Entities
Alleged: Dr. Geoffrey Franglen developed an AI system deployed by St George's Hospital Medical School, which harmed Women and Minority Groups.

Incident Stats

Incident ID
43
Report Count
4
Incident Date
1998-03-05
Editors
Sean McGregor
Applied Taxonomies
CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.
 

43

Notes (special interest intangible harm)

Input any notes that may help explain your answers.
 

The Commission for Racial Equality found St. George's Hospital Medical School guilty of discrimination against women and members of ethnic minorities.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is a characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.
 

yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format of YYYY
 

1979

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
 

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
 

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
 

Medium

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
 

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
 

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.
 

Standardized university admission form; previous admission and rejection decisions

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.
 

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk
 

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
 

Unintentional

Incident Reports


A Blot on the Profession
europepmc.org · 1998

Discrimination in medicine against women and members of ethnic minorities has long been suspected, but it has now been proved. St George's Hospital Medical School has been found guilty by the Commission for Racial E…

Computers which magnify our prejudices
marginalrevolution.com · 2013

As AI spreads, this will become an increasingly important and controversial issue:

For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed …

Racist AI behaviour is not a new problem
natbuckley.co.uk · 2016

Professor Margaret Boden, an AI and cognitive science researcher, took the time to speak to me in 2010 about computers, AI, morality and the future. One of the stories she told me comes back to me every now and then, most recently by Micros…

Racist in the Machine
read.dukeupress.edu · 2016

Companies and governments need to pay attention to the unconscious and institutional biases that seep into their algorithms, argues cybersecurity expert Megan Garcia. Distorted data can skew results in web searches, home loan decisions, or …

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Female Applicants Down-Ranked by Amazon Recruiting Tool
Aug 2016 · 33 reports

AI Beauty Judge Did Not Like Dark Skin
Sep 2016 · 10 reports

Sexist and Racist Google Adsense Advertisements
Jan 2013 · 27 reports
