Report 2859

Associated Incidents

Incident 502 · 3 Reports
Pennsylvania County's Family Screening Tool Allegedly Exhibited Discriminatory Effects

The Devil is in the Details: Interrogating Values Embedded in the Allegheny Family Screening Tool
aclu.org · 2023

Introduction

In 2017, the creators of the Allegheny Family Screening Tool (AFST) published a report describing the development process for a predictive tool used to inform responses to calls to Allegheny County, Pennsylvania’s child welfare agency about alleged child neglect. (The tool is not used to make screening decisions for allegations that include abuse or severe neglect, which are required to be investigated by state law.)

In a footnote on page 14 of that report, the tool creators described their decisions in a key component of the variable selection process — selecting a threshold for feature selection — as “rather arbitrary” and based on “trial and error”. Within this short aside lies an honest assessment of how the creators of predictive tools often view the development process: a process in which they have free rein to make choices they view as purely technical, even if those choices are made arbitrarily. In reality, design decisions made in the development of algorithmic tools are not just technical processes — they also include ethical choices, value judgments, and policy decisions. For example, the “rather arbitrary” threshold used in feature selection could have determined whether a family’s behavioral health diagnoses or history of eligibility for public benefit programs would impact their likelihood of being investigated by the county’s child welfare agency. When developers cast these kinds of design decisions as primarily technical questions, they may disguise them as objective, even though they may be made arbitrarily, out of convenience, or based on flawed logic.
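As a purely hypothetical illustration of how a feature-selection threshold operates as a policy lever, consider the short Python sketch below. The feature names and association scores are invented for illustration and are not drawn from the AFST; the point is only that moving the cutoff decides which variables are allowed to affect a family’s score at all.

```python
# Hypothetical sketch (invented features and scores, not the AFST's actual
# pipeline): a threshold-based feature-selection step in which the cutoff
# itself decides whether sensitive variables enter the model.

# Assumed association scores between candidate features and the outcome.
feature_scores = {
    "prior_referrals": 0.31,
    "behavioral_health_diagnosis": 0.12,
    "public_benefits_history": 0.11,
    "household_size": 0.04,
}

def select_features(scores, threshold):
    """Keep only features whose association score meets the threshold."""
    return [name for name, score in scores.items() if score >= threshold]

# A "rather arbitrary" choice of cutoff changes which parts of a family's
# record can influence their risk score downstream.
print(select_features(feature_scores, threshold=0.10))
# ['prior_referrals', 'behavioral_health_diagnosis', 'public_benefits_history']
print(select_features(feature_scores, threshold=0.15))
# ['prior_referrals']
```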

In this work, we demonstrate how algorithmic design choices function as policy decisions through an audit of the AFST. We highlight three values embedded in the AFST through an analysis of design decisions made in the model development process and discuss their impacts on families evaluated by the tool. Specifically, we explore the following design decisions:

  • Risky by association: The AFST’s method of grouping risk scores presents a misleading picture of families evaluated by the tool and treats families as “risky” by association (see the sketch after this list).
  • The more data the better: The county’s stated goal to “make decisions based on as much information as possible” comes at the expense of already impacted and marginalized communities — as demonstrated through the use of data from the criminal legal system and behavioral health systems despite historical and ongoing disparities in the communities targeted by those systems.
  • Marked in perpetuity: In using features that families cannot change, the AFST effectively offers families no way to escape their pasts, compounding the impacts of systemic harm and providing no meaningful opportunity for recourse.
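The excerpt above does not spell out how the AFST combines individual scores into a household-level score. The sketch below assumes, for illustration only, a maximum-over-members aggregation to show how a single member’s record can mark an entire family as “risky by association”; the names and numbers are invented.

```python
# Hypothetical sketch: the aggregation rule (maximum over household members)
# is an assumption for illustration, not a rule stated in the excerpt.

def household_score(member_scores):
    """Assumed aggregation: the household inherits its highest member score."""
    return max(member_scores)

# Invented example scores on a 1-to-20 scale (the values are made up).
family = {"parent": 4, "relative_in_home": 18, "child": 2}

print(household_score(family.values()))
# 18 -- the whole family is scored by its "riskiest" member
```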

Conclusion

A 2017 ethical analysis of the AFST described “predictive risk modeling tools” in general as “more accurate than any alternative” and “more transparent than alternatives”. In its response to this analysis, the county similarly called the AFST “more accurate” and “inherently more transparent” than current decision-making strategies. But when tools like the AFST are created with arbitrary design decisions, give families no opportunity for recourse, perpetuate racial bias, and score people who may have disabilities as inherently “riskier,” this default assumption of the inherent objectivity of algorithmic tools — and the use of the tools altogether — must seriously be called into question. Here, we focused on a limited set of design decisions related to a particular version of the AFST. To our knowledge, several of these design decisions are still shaping the deployed version of the tool, though we were only able to analyze the impacts of these decisions for V2.1 of the tool. We hope future work will expand upon this analysis to improve our understanding of the AFST as well as other algorithmic tools used in these contexts, including structured decision-making tools.

In contrast to debates about how to make algorithms that function in contexts marked by pervasive and entrenched discrimination “fair” or “accurate,” Green and Mohamed et al. propose new frameworks that focus instead on connecting our understanding of algorithmic oppression to the broader social and economic contexts in which algorithms operate to evaluate whether algorithms can actually be designed to promote justice. In the years since the initial development of the AFST, impacted community members and others who interact with the AFST have expressed concerns about racial bias and suggested alternatives to the AFST, including non-technical changes to the county’s practices such as improving hiring and training conditions for workers, changes to state laws that affect the family regulation system, and reimagining relationships between community members and the agency. Yet similar tools created by largely the same team of researchers that created the AFST have recently been deployed in Douglas County, Colorado and Los Angeles County, California. The AFST’s developers continue to propose additional use cases for these kinds of predictive tools that rely on biased data sources, even positing that they can be used to reduce racial bias in the family regulation system as part of “racial equity feedback loops”. But as Sasha Costanza-Chock poses in her 2020 book Design Justice, “why do we continue to design technologies that reproduce existing systems of power inequality when it is so clear to so many that we urgently need to dismantle those systems?”
