
A recent National Audit Office (NAO) report on the Department for Work and Pensions (DWP) financial accounts reveals that the DWP is expanding its use of machine learning to identify potential benefits fraud.
Since 2021, the DWP has used a machine learning model to flag potentially fraudulent claims for Universal Credit (UC) advances. The model was created by training an algorithm on fraud referrals and historic claimant data, and it makes predictions about which new benefits claims could be fraudulent or contain errors.
When a claim scores above a certain threshold, it is referred to a caseworker, who then manually reviews the claim.
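As a rough sketch of how threshold-based flagging of this kind generally works (not the DWP’s actual model, data, or threshold, none of which have been published), a classifier trained on historic referrals assigns each incoming claim a risk score, and any claim scoring above a configured cut-off is queued for a caseworker. The features, model choice, and cut-off below are hypothetical illustrations only.

```python
# Hypothetical illustration of threshold-based claim flagging.
# Features, labels, model and threshold are invented for this sketch
# and do not reflect the DWP's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historic claims: each row holds a claim's (made-up) numeric features;
# labels mark whether a past referral was confirmed as fraud or error.
X_train = np.array([[0.1, 3, 1], [0.9, 0, 7], [0.2, 2, 1], [0.8, 1, 6]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

REVIEW_THRESHOLD = 0.7  # arbitrary cut-off for this sketch

def flag_for_review(claim_features):
    """Return True if the predicted risk exceeds the threshold,
    i.e. the claim would be passed to a caseworker for manual review."""
    risk = model.predict_proba([claim_features])[0, 1]
    return risk >= REVIEW_THRESHOLD

# Example: score a new incoming claim
print(flag_for_review([0.85, 0, 5]))
```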
Responding in 2021, Big Brother Watch, the civil liberties and privacy campaigning organisation, said: “Leaving a computer to decide whose benefits application needs to be reviewed is an invasion of privacy and opens the door for unfairness and discrimination in the welfare system.”
The NAO report, published last Thursday, highlighted that the DWP is set to invest around £70 million in “advanced analytics” between the 2022-23 and 2024-25 financial years, deepening its technological anti-fraud capabilities.
Further, it underscored that since last year, similar machine learning models have been designed and piloted to prevent fraud in four “key” risk areas of Universal Credit — people living together, self-employment, capital, and housing.
Despite the DWP expecting “advanced analytics” to help it generate savings of £1.6 billion by 2031, the report states that there is an “inherent risk” that the algorithms which flag benefits claims for review could be biased “due to unforeseen bias in the input data or the design of the model itself.”
While the report says that the DWP has “tight governance and control” of its machine learning and has put safeguards in place, the DWP’s ability to test for unfair impacts across protected characteristics is “currently limited.” This is due to claimants not always answering the optional demographics-focused questions when making a benefits claim.
In a comment to the BBC, Alison Garnham, Chief Executive of the Child Poverty Action Group, said: “Expanding the technology while ignoring calls for transparency and rigorous monitoring of and protections against bias will risk serious harm to vulnerable families.”
In spite of the “challenge in balancing transparency over how it uses machine learning to provide public confidence in the benefit system with protecting its capabilities by not tipping off fraudsters about how it tackles fraud,” the report suggests that the DWP “should be able to provide assurance that it is not unfairly treating any group of customers.”