AI Incident Database

Report 715

Associated Incidents

Incident 40 · 21 Reports
COMPAS Algorithm Reportedly Performs Poorly in Crime Recidivism Prediction

Software 'no more accurate than untrained humans' at judging reoffending risk
theguardian.com · 2018

Program used to assess more than a million US defendants may not be accurate enough for potentially life-changing decisions, say experts

The credibility of a computer program used for bail and sentencing decisions has been called into question after it was found to be no more accurate at predicting the risk of reoffending than people with no criminal justice experience provided with only the defendant’s age, sex and criminal history.

The algorithm, called Compas (Correctional Offender Management Profiling for Alternative Sanctions), is used throughout the US to weigh up whether defendants awaiting trial or sentencing are at too much risk of reoffending to be released on bail.

Since being developed in 1998, the tool is reported to have been used to assess more than one million defendants. But a new paper has cast doubt on whether the software’s predictions are sufficiently accurate to justify its use in potentially life-changing decisions.


Hany Farid, a co-author of the paper and professor of computer science at Dartmouth College in New Hampshire, said: “The cost of being wrong is very high and at this point there’s a serious question over whether it should have any part in these decisions.”

The analysis comes as courts and police forces internationally are increasingly relying on computerised approaches to predict the likelihood of people reoffending and to identify potential crime hotspots where police resources should be concentrated. In the UK, East Midlands police force are trialling software called Valcri, aimed at generating plausible ideas about how, when and why a crime was committed as well as who did it, and Kent Police have been using predictive crime mapping software called PredPol since 2013.

The trend has raised concerns about whether such tools could introduce new forms of bias into the criminal justice system, as well as questions about the regulation of algorithms to ensure the decisions they reach are fair and transparent.

The latest analysis focuses on the more basic question of accuracy.

Farid, with colleague Julia Dressel, compared the ability of the software – which combines 137 measures for each individual – against that of untrained workers, contracted through Amazon’s Mechanical Turk online crowd-sourcing marketplace.

The academics used a database of more than 7,000 pretrial defendants from Broward County, Florida, which included individual demographic information, age, sex, criminal history and arrest record in the two-year period following the Compas scoring.

The online workers were given short descriptions that included a defendant’s sex, age and previous criminal history and asked whether they thought the defendant would reoffend. Despite using far less information than Compas (seven variables versus 137), the pooled human judgments were accurate in 67% of cases, compared with 65% accuracy for Compas.
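The pooling step described above can be sketched in a few lines: each defendant gets several yes/no predictions from different workers, the majority vote becomes the pooled prediction, and accuracy is the fraction of pooled predictions matching observed outcomes. The votes and outcomes below are invented for illustration; the study's actual data came from Mechanical Turk and the Broward County records.

```python
# Illustrative sketch (not the study's code): majority-vote pooling of
# crowd predictions, then accuracy against observed outcomes.
from collections import Counter

def pooled_prediction(votes):
    """Majority vote over individual workers' 1/0 reoffence predictions."""
    return Counter(votes).most_common(1)[0][0]

def accuracy(predictions, outcomes):
    """Fraction of predictions that match the observed outcomes."""
    return sum(p == o for p, o in zip(predictions, outcomes)) / len(outcomes)

# Hypothetical example: 5 defendants, 3 worker votes each, plus outcomes.
worker_votes = [
    [1, 1, 0],  # two of three workers predict reoffence
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
]
outcomes = [1, 0, 1, 1, 1]

pooled = [pooled_prediction(v) for v in worker_votes]
print(pooled)                      # [1, 0, 1, 0, 1]
print(accuracy(pooled, outcomes))  # 0.8
```

With real data the same pooling is what lifted the crowd to 67% accuracy, slightly above the 65% reported for Compas.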

In a second analysis, the paper found that Compas’s accuracy at predicting recidivism could also be matched using a simple calculation involving only an offender’s age and the number of prior convictions.

“When you boil down what the software is actually doing, it comes down to two things: your age and number of prior convictions,” said Farid. “If you are young and have a lot of prior convictions you are high risk.”
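Farid's "two things" observation amounts to a linear rule over age and prior convictions. A minimal sketch of that kind of rule is below; the weights and threshold are invented for illustration, since the study fit its own two-variable classifier to the Broward County data.

```python
# Hypothetical two-variable risk rule of the kind the paper found could
# match Compas's accuracy. Weights and threshold are made up.

def risk_score(age, prior_convictions, w_age=-0.05, w_priors=0.3, bias=1.0):
    """Linear score: younger defendants with more priors score higher."""
    return bias + w_age * age + w_priors * prior_convictions

def predict_high_risk(age, prior_convictions, threshold=0.5):
    """Classify as high risk when the score clears the threshold."""
    return risk_score(age, prior_convictions) > threshold

# A 20-year-old with 4 prior convictions vs a 50-year-old with none:
print(predict_high_risk(20, 4))   # True  (young, many priors)
print(predict_high_risk(50, 0))   # False
```

The point of the paper's second analysis is that a transparent rule this simple performed about as well as the proprietary 137-feature system.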

“As we peel the curtain away on these proprietary algorithms, the details of which are closely guarded, it doesn’t look that impressive,” he added. “It doesn’t mean we shouldn’t use it, but judges and courts and prosecutors should understand what is behind this.”

Seena Fazel, a professor of forensic psychiatry at the University of Oxford, agreed that the inner workings of such risk assessment tools ought to be made public so that they can be scrutinised.

However, he said that in practice, such algorithms were not used to provide a “yes or no” answer, but were useful in giving gradations of risk and highlighting areas of vulnerability – for instance, recommending that a person be assigned a drug support worker on release from prison.

“I don’t think you can say these algorithms have no value,” he said. “There’s lots of other evidence suggesting they are useful.”


The paper also highlights the potential for racial asymmetries in the outputs of such software that can be difficult to avoid – even if the software itself is unbiased.

The analysis showed that while the accuracy of the software was the same for black and white defendants, the so-called false positive rate (when someone who does not go on to offend is classified as high risk) was higher for black than for white defendants. This kind of asymmetry is mathematically inevitable where two populations have different underlying rates of reoffending – in the Florida data set the black defendants were more likely to reoffend – but such disparities nonetheless raise thorny questions about fairness.

