Incident 40: COMPAS Algorithm Performs Poorly in Crime Recidivism Prediction

Summary: Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an algorithmic risk-assessment tool used in the judicial system to estimate defendants' likelihood of recidivism, was found to be no more accurate than untrained human evaluators.

Entities

Alleged: AI system developed and deployed by Equivant, which affected Accused People.

Incident Stats

Incident ID: 40
Report Count: 22
Incident Date: 2016-05-23
Editors: Sean McGregor
Applied Taxonomies: CSETv0, CSETv1, GMF, MIT

CSETv1 Taxonomy Classifications

Taxonomy Details

Incident Number

The number of the incident in the AI Incident Database.

40

AI Tangible Harm Level Notes

Notes about the AI tangible harm level assessment.

CSET considers wrongful detention, wrongful imprisonment, and wrongful differential/disproportionate imprisonment amounts to be tangible harm, because of the loss of physical freedom and autonomy.

CSETv0 Taxonomy Classifications

Taxonomy Details

Problem Nature

Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Unknown/unclear

Physical System

Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
 

Software only

Level of Autonomy

The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.

Medium

Nature of End User

"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
 

Amateur

Public Sector Deployment

"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation). "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company), on the other.
 

Yes

Data Inputs

A brief description of the data that the AI system(s) used or were trained on.
 

A questionnaire of 137 factors, such as age, prior convictions, and criminal record.
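
For a concrete sense of what these inputs look like in practice, the sketch below loads the COMPAS scores ProPublica obtained from Broward County, Florida, and inspects a few recorded factors alongside the 1-10 decile risk score the tool reports. The file and column names follow ProPublica's public release (github.com/propublica/compas-analysis) and are assumptions here rather than part of this incident record.

```python
# Minimal sketch: inspect COMPAS inputs and scores in ProPublica's released
# Broward County data (https://github.com/propublica/compas-analysis).
# File and column names are assumptions based on that release.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

# A few questionnaire-derived factors, the 1-10 decile risk score COMPAS
# reports, and the observed two-year recidivism outcome.
cols = ["age", "priors_count", "c_charge_degree", "decile_score", "two_year_recid"]
print(df[cols].head())

# How the reported risk deciles are distributed across defendants.
print(df["decile_score"].value_counts().sort_index())
```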

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.

1.1. Unfair discrimination and misrepresentation

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
 
  1. Discrimination and Toxicity

Entity

Which, if any, entity is presented as the main cause of the risk.
 

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.
 

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.
 

Unintentional

Incident Reports

How We Analyzed the COMPAS Recidivism Algorithm
propublica.org · 2016

Across the nation, judges, probation and parole officers are increasingly using algorithms to assess a criminal defendant’s likelihood of becoming a recidivist – a term used to describe criminals who re-offend. There are do…
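
The core of ProPublica's analysis was an error-rate comparison: among defendants who did not go on to reoffend within two years, Black defendants were roughly twice as likely as white defendants to have been labeled higher risk. A minimal sketch of that comparison, assuming ProPublica's released dataset and its convention of grouping the "Medium" and "High" score labels as higher risk:

```python
# Sketch of a ProPublica-style error-rate comparison on the released
# Broward County data (column names assumed from that release).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
df["high_risk"] = df["score_text"] != "Low"  # ProPublica grouped Medium/High

for race in ["African-American", "Caucasian"]:
    sub = df[df["race"] == race]
    no_recid = sub[sub["two_year_recid"] == 0]  # did not reoffend
    recid = sub[sub["two_year_recid"] == 1]     # did reoffend
    fpr = no_recid["high_risk"].mean()   # labeled risky but did not reoffend
    fnr = (~recid["high_risk"]).mean()   # labeled low risk but did reoffend
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```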

Inspecting Algorithms for Bias
technologyreview.com · 2017

It was a striking story. “Machine Bias,” the headline read, and the teaser proclaimed: “There’s software used across the country to predict future criminals. And it’s biased against blacks.”

ProPublica, a Pulitzer Prize–winning nonprofit ne…

When a Computer Program Keeps You in Jail
nytimes.com · 2017

The criminal justice system is becoming automated. At every stage — from policing and investigations to bail, evidence, sentencing and parole — computer systems play a role. Artificial intelligence deploys cops on the beat. Audio sensors ge…

ProPublica Is Wrong In Charging Racial Bias In An Algorithm
acsh.org · 2018

Predicting the future is not only the provenance of fortune tellers or media pundits. Predictive algorithms, based on extensive datasets and statistics, have overtaken wholesale and retail operations as any online shopper knows. And in the l…

Mechanical Turkers out-predicted COMPAS, a major judicial algorithm
theverge.com · 2018

Our most sophisticated crime-predicting algorithms may not be as good as we thought. A study published today in Science Advances takes a look at the popular COMPAS algorithm — used to assess the likelihood that a given defendant will reoffe…

A Popular Algorithm Is No Better at Predicting Crimes Than Random People
theatlantic.com · 2018

Caution is indeed warranted, according to Julia Dressel and Hany Farid from Dartmouth College. In a new study, they have shown that COMPAS is no better at predicting an individual’s risk of recidivism than random volunteers recruited from t…

The accuracy, fairness, and limits of predicting recidivism
science.org · 2018

Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. These predictions are used in pretrial, parole, and sentencing decisions. Proponents of these systems argue that big d…
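
The same paper reports that a linear classifier with just two features, age and total number of prior convictions, matches the roughly 65% accuracy COMPAS achieves with its 137 inputs. A hedged sketch of that comparison on ProPublica's released data (not the authors' exact pipeline; dataset and column names assumed as above):

```python
# Sketch: two-feature classifier vs. COMPAS, in the spirit of Dressel &
# Farid (2018). Dataset and column names assumed from ProPublica's release.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("compas-scores-two-years.csv")
X = df[["age", "priors_count"]]  # two features vs. COMPAS's 137 inputs
y = df["two_year_recid"]

# Cross-validated accuracy of the simple model.
simple = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="accuracy")
print(f"two-feature model accuracy: {simple.mean():.2f}")

# COMPAS's accuracy on the same data, treating decile scores >= 5 as a
# prediction of reoffending (one common thresholding convention).
compas_pred = (df["decile_score"] >= 5).astype(int)
print(f"COMPAS accuracy: {(compas_pred == y).mean():.2f}")
```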

Are programs better than people at predicting reoffending?
economist.com · 2018

IN AMERICA, computers have been used to assist bail and sentencing decisions for many years. Their proponents argue that the rigorous logic of an algorithm, trained with a vast amount of data, can make judgments about whether a convict will…

Software 'no more accurate than untrained humans' at judging reoffending risk
theguardian.com · 2018

Program used to assess more than a million US defendants may not be accurate enough for potentially life-changing decisions, say experts

The credibility of a computer program used for bail and sentencing decisions has been called into quest…

Bail Algorithms Are As Accurate As Random People Doing an Online Survey
motherboard.vice.com · 2018

Algorithms that assess people’s likelihood to reoffend as part of the bail-setting process in criminal cases are, to be frank, really scary.

We don’t know very much about how they work—the companies that make them are intensely secretive ab…

Court software may be no more accurate than web survey takers in predicting criminal risk
sciencedaily.com · 2018

A widely-used computer software tool may be no more accurate or fair at predicting repeat criminal behavior than people with no criminal justice experience, according to a Dartmouth College study.

The Dartmouth analysis showed that non-expe…

Crime-Predicting Algorithms May Not Fare Much Better Than Untrained Humans
wired.com · 2018

The American criminal justice system couldn’t get much less fair. Across the country, some 1.5 million people are locked up in state and federal prisons. More than 600,000 people, the vast majority of whom have yet to be convicted of a crim…

Common Computer Program Predicts Recidivism as Poorly as Humans
inverse.com · 2018

Just like a professional chef or a heart surgeon, a machine learning algorithm is only as good as the training it receives. And as algorithms increasingly take the reins and make decisions for humans, we’re finding out that a lot of them d…

Algorithms Are No Better at Predicting Repeat Offenders Than Inexperienced Humans
futurism.com · 2018

Predicting Recidivism

Recidivism is the likelihood of a person convicted of a crime to offend again. Currently, this rate is determined by predictive algorithms. The outcome can affect everything from sentencing decisions to whether or not …

Study Finds Crime-Predicting Algorithm Is No Smarter Than Online Poll Takers
gizmodo.com.au · 2018

In a study published Wednesday, a pair of Dartmouth researchers found that a popular risk assessment algorithm was no better at predicting a criminal offender's likelihood of reoffending than an internet survey of humans with little or no r…

Criminal Sentencing Algorithm No More Accurate Than Random People on the Internet
pbs.org · 2018

An “unbiased” computer algorithm used for informing judicial decisi…

Sentence by Numbers: The Scary Truth Behind Risk Assessment Algorithms
digitalethics.org · 2018

Although crime rates have fallen steadily since the 1990s, rates of recidivism remain a factor in the areas of both public safety and prisoner management. The National Institute of Justice defines recidivism as “criminal acts that resulted …

Can Racial Bias Ever Be Removed From Criminal Justice Algorithms?
psmag.com · 2018

Dozens of people packed into a Philadelphia courtroom on June 6th to voice their objections to a proposed criminal justice algorithm. The algorithm, developed by the Pennsylvania Commission on Sentencing, wa…

AI is convicting criminals and determining jail time, but is it fair?
weforum.org · 2018

When Netflix gets a movie recommendation wrong, you’d probably think that it’s not a big deal. Likewise, when your favourite sneakers don’t make it into Amazon’s list of recommended products, it’s probably not the end of the world. But when…

Study finds algorithm no better than random people at predicting recidivism
privacyinternational.org · 2019

In a study of COMPAS, an algorithmic tool used in the US criminal justice system , Dartmouth College researchers Julia Dressel and Hany Farid found that the algorithm did no better than volunteers recruited via a crowdsourcing site. COMPAS,…

Amicus Brief in New Jersey v. Pickett
upturn.org · 2020

PRELIMINARY STATEMENT AND STATEMENT OF INTEREST

Independent and adversarial review of software used in the criminal legal system is necessary to protect the courts from unreliable evidence and to ensure that the introduction of new technolo…

Artificial Intelligence, Algorithmic Racism and the Canadian Criminal Justice System
slaw.ca · 2020

Recidivism risk assessment is the process of determining the likelihood that an accused, convicted, or incarcerated person will reoffend. The process is aimed at assisting in the determination of the appropriate limitation on the freedom o…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. See this research paper for more details.

Similar Incidents

By textual similarity

Northpointe Risk Models

Machine Bias - ProPublica

May 2016 · 15 reports

Predictive Policing Biases of PredPol

Policing the Future

Nov 2015 · 17 reports

Sexist and Racist Google Adsense Advertisements

Discrimination in Online Ad Delivery

Jan 2013 · 27 reports
