AI Incident Database

Report 712
Crime-Predicting Algorithms May Not Fare Much Better Than Untrained Humans
wired.com · 2018

The American criminal justice system couldn’t get much less fair. Across the country, some 1.5 million people are locked up in state and federal prisons. More than 600,000 people, the vast majority of whom have yet to be convicted of a crime, sit behind bars in local jails. Black people make up 40 percent of those incarcerated, despite accounting for just 13 percent of the US population.

With the size and cost of jails and prisons rising—not to mention the inherent injustice of the system—cities and states across the country have been lured by tech tools that promise to predict whether someone might commit a crime. These so-called risk assessment algorithms, currently used in states from California to New Jersey, crunch data about a defendant’s history—things like age, gender, and prior convictions—to help courts decide who gets bail, who goes to jail, and who goes free.

But as local governments adopt these tools, and lean on them to inform life-altering decisions, a fundamental question remains: What if these algorithms aren’t actually any better at predicting crime than humans are? What if recidivism isn’t actually that predictable at all?

That’s the question that Dartmouth College researchers Julia Dressel and Hany Farid set out to answer in a new paper published today in the journal Science Advances. They found that one popular risk-assessment algorithm, called Compas, predicts recidivism about as well as a random online poll of people who have no criminal justice training at all.

"There was essentially no difference between people responding to an online survey for a buck and this commercial software being used in the courts," says Farid, who teaches computer science at Dartmouth. "If this software is only as accurate as untrained people responding to an online survey, I think the courts should consider that when trying to decide how much weight to put on them in making decisions."

Man vs. Machine

While she was still a student at Dartmouth majoring in computer science and gender studies, Dressel came across a ProPublica investigation that showed just how biased these algorithms can be. That report analyzed Compas's predictions for some 7,000 defendants in Broward County, Florida, and found that the algorithm was more likely to incorrectly categorize black defendants as having a high risk of reoffending. It was also more likely to incorrectly categorize white defendants as low risk.

That was alarming enough. But Dressel also couldn't seem to find any research that studied whether these algorithms actually improved on human assessments.


"Underlying the whole conversation about algorithms was this assumption that algorithmic prediction was inherently superior to human prediction," she says. But little proof backed up that assumption; this nascent industry is notoriously secretive about developing these models. So Dressel and her professor, Farid, designed an experiment to test Compas on their own.

Using Amazon Mechanical Turk, an online marketplace where people get paid small amounts to complete simple tasks, the researchers asked about 400 participants to decide whether a given defendant was likely to reoffend based on just seven pieces of data, not including that person's race. The sample included 1,000 real defendants from Broward County, because ProPublica had already made its data on those people, as well as information on whether they did in fact reoffend, public.

They divided the participants into groups, so that each turk assessed 50 defendants, and gave the following brief description:

The defendant is a [SEX] aged [AGE]. They have been charged with: [CRIME CHARGE]. This crime is classified as a [CRIMINAL DEGREE]. They have been convicted of [NON-JUVENILE PRIOR COUNT] prior crimes. They have [JUVENILE FELONY COUNT] juvenile felony charges and [JUVENILE-MISDEMEANOR COUNT] juvenile misdemeanor charges on their record.
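Mechanically, the description above is a simple fill-in-the-blank template. A hypothetical rendering in Python might look like the sketch below; the field names and sample values are assumptions for illustration, not the researchers' actual code or data.

```python
# Hypothetical sketch of rendering the seven-field defendant description.
# Dictionary keys are illustrative assumptions, not the study's identifiers.
def describe(d: dict) -> str:
    return (
        f"The defendant is a {d['sex']} aged {d['age']}. "
        f"They have been charged with: {d['charge']}. "
        f"This crime is classified as a {d['degree']}. "
        f"They have been convicted of {d['priors']} prior crimes. "
        f"They have {d['juv_felonies']} juvenile felony charges and "
        f"{d['juv_misdemeanors']} juvenile misdemeanor charges on their record."
    )

# Made-up example defendant:
print(describe({
    "sex": "male", "age": 34, "charge": "grand theft",
    "degree": "felony", "priors": 2,
    "juv_felonies": 0, "juv_misdemeanors": 1,
}))
```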

That's just seven data points, compared to the 137 that Compas amasses through its defendant questionnaire. In a statement, Equivant says it only uses six of those data points to make its predictions. Still, these untrained online workers were roughly as accurate in their predictions as Compas.

Overall, the turks predicted recidivism with 67 percent accuracy, compared to Compas's 65 percent. Even without access to a defendant's race, they incorrectly predicted that black defendants would reoffend more often than they incorrectly predicted that white defendants would, a disparity in what is known as the false positive rate. That indicates that even when racial data isn't available, certain data points, like number of convictions, can become proxies for race, a central obstacle to eradicating bias in these algorithms. The Dartmouth researchers' false positive rate for black defendants was 37 percent, compared to 27 percent for white defendants.
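The two numbers being compared here, overall accuracy and the false positive rate, can be made concrete with a short sketch. This is illustrative code on made-up toy data, not the study's actual analysis; labels are 1 if a defendant was predicted (or observed) to reoffend, 0 otherwise.

```python
def accuracy(preds, actuals):
    """Fraction of predictions that match the observed outcome."""
    return sum(p == a for p, a in zip(preds, actuals)) / len(preds)

def false_positive_rate(preds, actuals):
    """Among defendants who did NOT reoffend, the fraction predicted to reoffend."""
    non_reoffender_preds = [p for p, a in zip(preds, actuals) if a == 0]
    return sum(non_reoffender_preds) / len(non_reoffender_preds)

# Toy data: five defendants, one of whom is wrongly flagged as high risk.
preds   = [1, 0, 1, 0, 1]
actuals = [1, 0, 0, 0, 1]
print(accuracy(preds, actuals))             # 0.8
print(false_positive_rate(preds, actuals))  # one of three non-reoffenders flagged
```

Computed per racial group, as ProPublica and the Dartmouth researchers did, the second metric is what reveals the 37-versus-27-percent disparity described above.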
