
Incident 92: Apple Card's Credit Assessment Algorithm Allegedly Discriminated against Women

Summary: Goldman Sachs customers reported that Apple Card's credit assessment algorithm showed gender bias: men received significantly higher credit limits than women with equal credit qualifications.


Entities

Alleged: Apple developed an AI system deployed by Goldman Sachs, which harmed Apple Card female users and Apple Card female credit applicants.

Incident Status

Incident ID: 92
Report Count: 6
Incident Date: 2019-11-11
Editors: Sean McGregor, Khoa Lam
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classes

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

92

AI Tangible Harm Level Notes

Notes about the AI tangible harm level assessment.

There was gender bias in the rates and credit limits offered by the Apple Card. This resulted in financial harm based on gender.

Special Interest Intangible Harm

An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, whether an AI was involved, or whether there is a characterizable class or subgroup of harmed entities. It is also not assessing whether an intangible harm occurred. It is only asking whether a special interest intangible harm occurred.

yes

Date of Incident Year

The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank. Enter in the format YYYY.

2019

Date of Incident Month

The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank. Enter in the format MM.

11

Estimated Date

"Yes" if the date was estimated. "No" otherwise.

No

CSETv0 Taxonomy Classes

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Specification

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

High

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)' operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

No

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.

credit score, credit report, reported income
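
To make the alleged failure mode concrete, here is a minimal, purely hypothetical sketch of the kind of disparate-impact check an auditor might run over credit-limit decisions built from inputs like these. All records, field names (gender, credit_score, reported_income, credit_limit), and numbers below are invented for illustration and do not reflect Goldman Sachs' actual underwriting model or data.

# Hypothetical illustration only: compare average credit limits across
# genders for similarly qualified applicants. All data is invented.
from statistics import mean

applicants = [
    {"gender": "F", "credit_score": 780, "reported_income": 90_000, "credit_limit": 1_000},
    {"gender": "M", "credit_score": 760, "reported_income": 90_000, "credit_limit": 20_000},
    {"gender": "F", "credit_score": 745, "reported_income": 70_000, "credit_limit": 2_500},
    {"gender": "M", "credit_score": 740, "reported_income": 70_000, "credit_limit": 12_000},
]

def mean_limit(gender):
    # Average credit limit granted to applicants of the given gender.
    return mean(a["credit_limit"] for a in applicants if a["gender"] == gender)

ratio = mean_limit("M") / mean_limit("F")
print(f"male/female mean credit-limit ratio: {ratio:.1f}x")

# A ratio far from 1.0 among applicants with comparable scores and incomes
# is a signal worth a controlled analysis (e.g., regressing limits on score
# and income), not by itself proof of discrimination.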

MIT Taxonomy Classes

Machine-Classified

Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Report Timeline


Apple Card algorithm sparks gender bias allegations against Goldman Sachs
washingtonpost.com · 2019

What started with a viral Twitter thread metastasized into a regulatory investigation of Goldman Sachs’ credit card practices after a prominent software developer called attention to differences in Apple Card credit lines for male and femal…

The Apple Card Is the Most High-Profile Case of AI Bias Yet
designnews.com · 2019

The algorithm responsible for credit decisions for the Apple Card is giving females lower credit limits than equally qualified males. Those are the allegations that began spreading as consumers took to social media with complaints about App…

The Apple Card algo issue: What you need to know about A.I. in everyday life
cnbc.com · 2019

When tech entrepreneur David Heinemeier Hansson recently took to Twitter saying the Apple Card gave him a credit limit that was 20 times higher than his wife's, despite the fact that she had a higher credit score, it may have been the first …

Goldman Sachs’ misguided World Cup predictions could provide clues to the Apple Card controversy
qz.com · 2019

US regulators are investigating whether Apple’s credit card, launched in August, is biased against women. Software engineer David Heinemeier Hansson reported on social media that Apple had offered him a spending limit 20 times higher than h…

Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech
hbswk.hbs.edu · 2019

The possibility that Apple Card applicants were subject to gender bias opens a new frontier for the financial services sector in which regulators are largely absent, argues Karen Mills.

In late August, the Apple Card debuted with a minimali…

How the law got it wrong with Apple Card
techcrunch.com · 2021

Advocates of algorithmic justice have begun to see their proverbial “days in court” with legal investigations of enterprises like UHG and Apple Card. The Apple Card case is a strong example of how current anti-discrimination laws fall short…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. For more details, see this research paper.

Similar Incidents

By textual similarity

Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias

Twitter's Photo Crop Algorithm Favors White Faces and Women

Sep 2020 · 5 reports

Facebook’s Hate Speech Detection Algorithms Allegedly Disproportionately Failed to Remove Racist Content towards Minority Groups

Facebook’s race-blind practices around hate speech came at the expense of Black users, new documents show

Nov 2021 · 2 reports

Amazon’s Robotic Fulfillment Centers Have Higher Serious Injury Rates

Amazon’s robotic fulfillment centers have higher serious injury rates

Sep 2020 · 3 reports

