Description: A coalition of 15 human rights groups has launched legal action against the French government alleging that an algorithm used to detect welfare fraud discriminates against single mothers and disabled people. The algorithm assigns risk scores based on personal data. The process allegedly subjects vulnerable recipients to invasive investigations, violates privacy and anti-discrimination laws, and disproportionately affects marginalized groups.
Editor Notes: Reconstructing the timeline of events: (1) Since the 2010s: The algorithm has been used to detect errors and fraud in France’s welfare system. (2) 2014: One version of the algorithm scored single-parent families, particularly those recently divorced, and disabled individuals receiving the Allocation Adulte Handicapé (AAH) as higher risk. (3) 2020: A suspected update to the algorithm took place, though the CNAF has not publicly shared the source code of the current model. (4) October 15, 2024: A coalition of 15 human rights groups, including La Quadrature du Net and Amnesty International, filed a legal challenge in France’s top administrative court, arguing the algorithm discriminates against marginalized groups.
Entities
Alleged: Government of France developed an AI system deployed by Caisse Nationale des Allocations Familiales (CNAF), which harmed Allocation Adulte Handicapé recipients, Disabled people in France, Single mothers in France, and the French general public.
Incident Stats
Incident ID: 822
Report Count: 2
Incident Date: 2024-10-15
Editors: Daniel Atherton
Incident Reports
Reports Timeline
wired.com · 2024
A coalition of human rights groups have today launched legal action against the French government over its use of algorithms to detect miscalculated welfare payments, alleging they discriminate against disabled people and single mothers.
Th…
amnesty.org · 2024
The French authorities must immediately stop the use of a discriminatory risk-scoring algorithm used by the French Social Security Agency’s National Family Allowance Fund (CNAF), which is used to detect overpayments and errors regarding ben…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.