Incident 101: How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

Description: A childcare benefits system in the Netherlands falsely accused thousands of families of fraud, in part due to an algorithm that treated having a second nationality as a risk factor.
Alleged: An unknown developer created an AI system deployed by the Dutch Tax Authority, which harmed the Dutch Tax Authority and Dutch families.

Suggested citation format

Anonymous. (2018-09-01) Incident Number 101. In McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
101
Report Count
3
Incident Date
2018-09-01
Editors
Sean McGregor, Khoa Lam

Incident Reports

Last month, Prime Minister of the Netherlands Mark Rutte—along with his entire cabinet—resigned after a year and a half of investigations revealed that since 2013, 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm.

Forced to pay back money they didn’t owe, many families were driven to financial ruin, and some were torn apart. Others were left with lasting mental health issues; people of color were disproportionately the victims.

After relentless investigative reporting and a string of parliamentary hearings both preceding and following the mass resignations, the role of algorithms and automated systems in the scandal became clear, revealing how an austere and punitive war on low-level fraud had been automated, leaving little room for accountability or basic human compassion. What's more, the automated system discriminated on the basis of nationality, flagging people with dual nationalities as likely fraudsters.

The childcare benefits scandal (kinderopvangtoeslagaffaire in Dutch) is a cautionary tale of the havoc that black-box algorithms can wreak, especially when they are weaponized to target society’s most vulnerable. It’s a problem that is not unique to the Netherlands: the Australian government faced its own “robodebt” scandal when its automated system for flagging benefits fraud stole nearly $1 billion from hundreds of thousands of innocent people. That case, too, came down to a poorly designed algorithm without human oversight, an extension of cruel austerity politics with untold collateral damage.

Here’s how the scandal unfolded.

Parents generally have to pay for childcare in the Netherlands. However, based on a parent’s income, the Dutch state reimburses a portion of the costs at the end of each month.

The fear of people gaming welfare systems is far from new and not particular to the Netherlands, but the rise of xenophobic far-right populism has placed it center stage in the national political discourse. Anti-immigrant politics have become increasingly normalized, and immigrants are often painted as a threat to the Dutch welfare state.

Against this backdrop, a hardline stance on benefits fraud has become nearly uniform across the political spectrum (even among many left-wing parties) over the past decade.

“Who pays the bill?” asked Geert Wilders, leader of the anti-immigrant Dutch Party for Freedom (the second largest party in the country), during a speech in 2008. “It’s the people of the Netherlands, the people who work hard, who properly save money and properly pay their taxes. The regular Dutch person does not receive money as a gift. Henk and Ingrid pay for Mohammed and Fatima.”

What followed was essentially a take-no-prisoners war on benefits and welfare fraud. Like many nations, the Netherlands has long automated aspects of its welfare system, paired with human oversight and review. From 2013 on (though these techniques may have been used earlier), authorities used algorithms to build risk profiles of residents deemed more likely to commit fraud, then used automated systems with little oversight to scan through benefits applicants and flag likely fraudsters, who were forced to repay money they did not actually owe.

Before the increased use of automated systems, the decision to cut off a family from benefits payments would have to go through extensive review, said Marlies van Eck, an assistant professor at Radboud University who researches automated decision making in government agencies and who previously worked for the national benefits department. Now, such choices have increasingly been left to algorithms, or algorithms themselves have acted as their own form of review.

“Suddenly, with technology in reach, benefits decisions were made in a really unprecedented manner,” she said. “In the past, if you worked for the government with paper files, you couldn’t suddenly decide from one moment to the next to stop paying people benefits.”

After years of denial, an investigation from the Dutch Data Protection Authority found that these algorithms were inherently discriminatory because they took variables such as whether someone had a second nationality into account.
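To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of risk scoring the investigation describes. The real model's features, weights, and thresholds were never published; every name and number below is hypothetical.

```python
# Hypothetical illustration only: the actual model's features and weights were never disclosed.
def risk_score(applicant: dict) -> float:
    """Toy linear risk score for a childcare benefits application."""
    score = 0.0
    score += 0.4 if applicant["has_second_nationality"] else 0.0  # discriminatory variable
    score += 0.3 if applicant["income"] < 20_000 else 0.0         # low income treated as "risk"
    score += 0.2 if applicant["missing_signature"] else 0.0       # paperwork slip treated as a fraud signal
    return score

def flagged_as_fraudster(applicant: dict, threshold: float = 0.5) -> bool:
    return risk_score(applicant) >= threshold

# Two applicants identical in every respect except the nationality flag:
base = {"income": 18_000, "missing_signature": False}
print(flagged_as_fraudster({**base, "has_second_nationality": True}))   # True
print(flagged_as_fraudster({**base, "has_second_nationality": False}))  # False
```

The point of the toy example is simply that two otherwise identical applicants can be treated differently solely because of the nationality flag, which is what the data protection authority found unlawful.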

As devastating as the scandal is, it treads familiar territory. The Netherlands continues to pilot discriminatory predictive-policing technology that perpetuates ethnic profiling, for example.

Marc Schuilenburg is a professor of sociology and criminology at Vrije Universiteit Amsterdam and the author of the book Hysteria: Crime, Media, and Politics. Having spent a significant portion of his career studying predictive policing algorithms, he argues that the child benefits scandal has to be seen within the context of this cultural and political shift toward punitive populism.

“The toeslagenaffaire [benefits scandal] is not an isolated problem,” Schuilenburg told Motherboard over Zoom. “It fits into a long tradition in the Netherlands of security policies that are designed to make clear that the days of tolerance are over, and that we are locked into this fight to the death with crimes such as welfare fraud. This fits into this whole notion of populist hysteria which I discuss in my book.”

“You see that these policies are spoken of in terms of war, in a hysterical military vocabulary—‘there is a war against fraud,’” he continued. “Through this language and these policies this brand of punishment populism prevails.”

For those the automated system classified as fraudsters, follow-up investigations were rarely done properly, which meant there was little recourse. In some cases, something as simple as a forgotten signature saddled families with an effectively irremovable label of having committed fraud. Once that label was applied, they were forced to retroactively pay the government back for all the childcare benefits they had received, which for many amounted to thousands of euros, and in some cases tens of thousands.

An investigation by the Dutch daily newspaper Trouw also found that parents accused of fraud were given the label “intent / gross negligence,” meaning they were not even eligible for a payment plan to gradually pay off debts that were false to begin with.

Victims were locked out of other benefits as well, such as the housing allowance and healthcare allowance. Many were forced to file for bankruptcy. The results were catastrophic.

“I believe in the States you have this saying ‘you’re one paycheck away from being homeless?’” van Eck told Motherboard over the phone. “Yeah, well that’s basically what we saw in this affair.”

“If you miss two or three months of payments, especially for the child care benefits, you may have to quit your job,” she explained. If someone quit their job to care for their children as a result, she said, they’d end up having financial difficulties. “There was this huge snowball effect because everything is connected with each other in the Netherlands. It was horrible.”

In one of the more egregious examples of the lack of humanity in the authorities’ approach, a report from Trouw revealed that the tax office had baselessly applied the mathematical Pareto principle to their punishments, assuming without evidence that 80 percent of the parents investigated for fraud were guilty and 20 percent were innocent.

The victims of the overzealous tax authorities were disproportionately people of color, highlighting how algorithms can perpetuate discriminatory structures and institutional racism.

According to Nadia Benaissa, a policy advisor at the digital rights group Bits of Freedom, fraud detection systems that use variables like nationality can create problematic feedback loops, much as predictive policing algorithms built on flawed assumptions become self-fulfilling prophecies that lead to the over-policing of minority groups.

Crucially, she said, we should place blame on the human individuals behind the creation and use of the algorithm rather than reify the technology as being the main driver.

“Systems and algorithms are human-made, and do exactly what they’ve been instructed to do,” she said. “They can act as an easy and sort of cowardly way to take the blame off yourself. What’s important to understand is that often algorithms can use historical data without any proper interpretation of the context surrounding that data. With that pattern, you can only expect social problems like institutional racism to increase. The result is a sort of feedback loop that increases social problems that already exist.”
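A rough way to see the feedback loop Benaissa describes: if a system's next round of scrutiny is allocated according to its own past flags, then any group that was over-scrutinized to begin with keeps producing more recorded "fraud" and keeps being over-scrutinized, regardless of how much fraud it actually commits. The simulation below is a toy sketch with invented numbers, not a reconstruction of any real system.

```python
# Toy feedback-loop simulation; all numbers are invented and do not describe any real system.
GROUP_SIZE = 10_000
TRUE_FRAUD_RATE = 0.02               # identical for both groups
scrutiny = {"A": 0.05, "B": 0.10}    # group B starts out investigated twice as often

for generation in range(1, 6):
    # Recorded "fraud" scales with how many people are investigated,
    # not with how much fraud a group actually commits.
    recorded = {g: GROUP_SIZE * rate * TRUE_FRAUD_RATE for g, rate in scrutiny.items()}
    # Naive "retraining": next round's scrutiny is allocated by each group's share of past cases.
    total = sum(recorded.values())
    scrutiny = {g: 0.15 * recorded[g] / total for g in recorded}
    print(generation, {g: round(r, 3) for g, r in scrutiny.items()})
# The initial 2:1 disparity is reproduced every round, despite the equal true fraud rates.
```

The toy shows entrenchment rather than correction: the system never learns that the underlying rates are the same, because the data it sees is shaped by its own earlier decisions.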

While some efforts to increase algorithmic transparency have been made recently (such as an algorithm register from the municipality of Amsterdam), many of the automated systems in use in society remain opaque, said van Eck, even for researchers.

“Transparency is certainly a major issue. As a researcher, it’s difficult because these algorithms remain invisible,” van Eck said. “If I want to know something, I have trouble finding a person who can talk to me about it. And, if they just buy a software system, it could be that nobody actually knows how it works. Transparency is important not just for citizens, but also on the administrative level.”

Beyond transparency, safeguards and accountability are especially important when algorithms are given enormous power over people's livelihoods, but as of now little of that exists in the Netherlands. And, in the meantime, smart algorithms and automated systems continue to take over a larger and larger share of administrative procedures.

For now, the families wrongly accused of fraud are waiting to be given €30,000 each in compensation, but that won’t be enough to make up for the divorces, broken homes, and the psychological toll that resulted from the affair.

Meanwhile, despite the gravity of the scandal, the resignation of Mark Rutte and his cabinet is largely symbolic. Though he resigned, he is still leading the government in the meantime and will be on the ballot in the national elections scheduled for next month. Rutte’s conservative VVD party is expected to win handily, meaning that both he and many of his ministers will likely return to their posts.

At the end of every government scandal, the words “never again” are thrown around a lot, but the combination of few strong ethical safeguards for algorithms and an increasingly automated government leaves those words with little meaning.

How a Discriminatory Algorithm Wrongly Accused Thousands of Families of Fraud

Chermaine Leysner’s life changed in 2012, when she received a letter from the Dutch tax authority demanding she pay back her child care allowance going back to 2008. Leysner, then a student studying social work, had three children under the age of 6. The tax bill was over €100,000. 

“I thought, ‘Don’t worry, this is a big mistake.’ But it wasn’t a mistake. It was the start of something big,” she said. 

The ordeal took nine years of Leysner’s life. The stress caused by the tax bill and her mother’s cancer diagnosis drove Leysner into depression and burnout. She ended up separating from her children’s father. “I was working like crazy so I could still do something for my children like give them some nice things to eat or buy candy. But I had times that my little boy had to go to school with a hole in his shoe,” Leysner said. 

Leysner is one of the tens of thousands of victims of what the Dutch have dubbed the “toeslagenaffaire,” or the child care benefits scandal. 

In 2019 it was revealed that the Dutch tax authorities had used a self-learning algorithm to create risk profiles in an effort to spot child care benefits fraud. 

Authorities penalized families over a mere suspicion of fraud based on the system’s risk indicators. Tens of thousands of families — often with lower incomes or belonging to ethnic minorities — were pushed into poverty because of exorbitant debts to the tax agency. Some victims committed suicide. More than a thousand children were taken into foster care. 

The Dutch tax authorities now face a new €3.7 million fine from the country's privacy regulator. In a statement released April 12, the agency outlined several violations of the EU's data protection rulebook, the General Data Protection Regulation, including not having a legal basis to process people's data and hanging on to the information for too long.

Aleid Wolfsen, the head of the Dutch privacy authority, called the violations unprecedented.

"For over 6 years, people were often wrongly labeled as fraudsters, with dire consequences ... some did not receive a payment arrangement or you were not eligible for debt restructuring. The tax authorities have turned lives upside down," he said, according to the statement.

As governments around the world are turning to algorithms and AI to automate their systems, the Dutch scandal shows just how utterly devastating automated systems can be without the right safeguards. The European Union, which positions itself as the world’s leading tech regulator, is working on a bill that aims to curb algorithmic harms. 

But critics say the bill misses the mark and would fail to protect citizens from incidents such as what happened in the Netherlands. 

No checks and balances

The Dutch system — which was launched in 2013 — was intended to weed out benefits fraud at an early stage. The criteria for the risk profile were developed by the tax authority, reports Dutch newspaper Trouw. Having dual nationality was marked as a big risk indicator, as was a low income. 

Why Leysner ended up in this situation is unclear. One reason could be that she had twins, which meant she needed more support from the government. Leysner, who was born in the Netherlands, also has Surinamese roots.

In 2020, Trouw and another Dutch news outlet, RTL Nieuws, revealed that the tax authorities had also kept secret blacklists of people for two decades, tracking both credible and unsubstantiated “signals” of potential fraud. Citizens had no way of finding out why they were on the list or of defending themselves.

An audit showed that the tax authorities focused on people with “a non-Western appearance,” while having Turkish or Moroccan nationality was a particular focus. Being on the blacklist also led to a higher risk score in the child care benefits system. 

A parliamentary report into the child care benefits scandal found several grave shortcomings, including institutional biases and authorities hiding information or misleading the parliament about the facts. Once the full scale of the scandal came to light, Prime Minister Mark Rutte's government resigned, only to regroup 225 days later.

In addition to the penalty announced April 12, the Dutch data protection agency also fined the Dutch tax administration €2.75 million in December 2021 for the “unlawful, discriminatory and therefore improper manner” in which the tax authority processed data on the dual nationality of child care benefit applicants. 

“There was a total lack of checks and balances within every organization of making sure people realize what was going on,” said Pieter Omtzigt, an independent member of the Dutch parliament who played a pivotal role in uncovering the scandal and grilling the tax authorities. 

“What is really worrying me is that I’m not sure that we’ve taken even vaguely enough preventive measures to strengthen our institutions to handle the next derailment,” he continued.

The new Rutte government has pledged to create a new algorithm regulator under the country’s data protection authority. Dutch Digital Minister Alexandra van Huffelen — who was previously the finance minister in charge of the tax authority — told POLITICO that the data authority’s role will be “to oversee the creation of algorithms and AI, but also how it plays out when it’s there, how it’s treated, make sure that is human-centered, and that it does apply to all the regulations that are in use.” The regulator will scrutinize algorithms in both the public and private sectors. 

Van Huffelen stressed the need to make sure humans are always in the loop. “What I find very important is to make sure that decisions, governmental decisions based on AI are also always treated afterwards by a human person,” she said.  

A warning to the rest of Europe

Europe’s top digital official, European Commission Executive Vice President Margrethe Vestager, said the Dutch scandal is exactly what every government should be scared of. 

“We have huge public sectors in Europe. There are so many different services where decision-making supported by AI could be really useful, if you trust it,” Vestager told the European Parliament in March. The EU’s new AI Act is aimed at creating that trust, she argued, “so that this big public sector market will be open also for artificial intelligence.” 

The Commission’s proposal for the AI Act restricts the use of so-called high-risk AI systems and bans certain “unacceptable” uses. Companies providing high-risk AI systems have to meet certain EU requirements. The AI Act also creates a public EU register of such systems in an effort to improve transparency and help with enforcement. 

That’s not good enough, argues Renske Leijten, a Socialist member of the Dutch parliament and another key politician who helped uncover the true scale of the scandal. She wants the AI Act to also apply to those using high-risk AI systems, in both the private and public sectors.

In the AI Act, “we see that there are more guarantees for your rights when companies and private enterprises are working with AI. But the important thing we must learn out of the child care benefit scandal is that this was not an enterprise or private sector … This was the government,” she said. 

As it is now, the AI Act will not protect citizens from similar dangers, said Dutch Green MEP Kim van Sparrentak, a member of the European Parliament’s AI Act negotiating team on the internal market committee. Van Sparrentak is pushing for the AI Act to have fundamental rights impact assessments that will also be published in the EU’s AI register. Parliament is also proposing adding obligations to the users of high-risk AI systems, including in the public sector. 

“Fraud prediction and predictive policing based on profiling should just be banned. Because we have seen only very bad outcomes and not a single person can be determined based on some of their data,” van Sparrentak said. 

In a report detailing how the Dutch government used ethnic profiling in the child care benefits scandal, Amnesty International calls on governments to ban the “use of data on nationality and ethnicity when risk-scoring for law enforcement purposes in the search of potential crime or fraud suspects.” 

The Netherlands is still reckoning with the aftermath of the scandal. The government has promised to pay victims €30,000 each in compensation. But for those like Leysner, that doesn't even begin to cover the years she lost — justice seems like a long way off.

“If you go through things like this, you also lose your trust in the government. So it's very difficult to trust what [authorities] say right now,” Leysner said.

This article has been updated with the results of the Dutch tax authorities’ investigation released in April.

Dutch scandal serves as a warning for Europe over risks of using algorithms

Until recently, it wasn’t possible to say that AI had a hand in forcing a government to resign. But that’s precisely what happened in the Netherlands in January 2021, when the incumbent cabinet resigned over the so-called kinderopvangtoeslagaffaire: the childcare benefits affair.

When a family in the Netherlands sought to claim their government childcare allowance, they needed to file a claim with the Dutch tax authority. Those claims passed through the gauntlet of a self-learning algorithm, initially deployed in 2013. In the tax authority’s workflow, the algorithm would first vet claims for signs of fraud, and humans would scrutinize those claims it flagged as high risk.

In reality, the algorithm developed a pattern of falsely labeling claims as fraudulent, and harried civil servants rubber-stamped the fraud labels. So, for years, the tax authority baselessly ordered thousands of families to pay back their claims, pushing many into onerous debt and destroying lives in the process.
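Sketched as a pipeline (purely illustrative; the real system's interface, scores, and thresholds were never made public), the failure was less the triage step itself than a "human review" stage that simply deferred to the model:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: int
    risk: float  # score assigned by the (opaque) model

def triage(claims, threshold=0.5):
    """Route high-risk claims to human review, as the workflow intended."""
    return [c for c in claims if c.risk >= threshold]

def rubber_stamp_review(flagged):
    """What effectively happened: overworked reviewers confirmed the model's label."""
    return {c.claim_id: "fraud" for c in flagged}

claims = [Claim(1, 0.2), Claim(2, 0.7), Claim(3, 0.9)]
print(rubber_stamp_review(triage(claims)))  # {2: 'fraud', 3: 'fraud'}: no independent check
```

Human oversight only works if the reviewer can, and is expected to, overrule the score; in this sketch the review stage adds no information of its own.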

“When there is disparate impact, there needs to be societal discussion around this, whether this is fair. We need to define what ‘fair’ is,” says Yong Suk Lee, a professor of technology, economy, and global affairs at the University of Notre Dame, in the United States. “But that process did not exist.”

Postmortems of the affair showed evidence of bias. Many of the victims had lower incomes, and a disproportionate number had ethnic minority or immigrant backgrounds. The model saw not being a Dutch citizen as a risk factor.

“The performance of the model, of the algorithm, needs to be transparent or published by different groups,” says Lee. That includes things like the model’s accuracy rate, he adds.

The tax authority’s algorithm evaded such scrutiny; it was an opaque black box, with no transparency into its inner workings. For those affected, it could be nigh impossible to tell exactly why they had been flagged. And they lacked any sort of due process or recourse to fall back upon.

“The government had more faith in its flawed algorithm than in its own citizens, and the civil servants working on the files simply divested themselves of moral and legal responsibility by pointing to the algorithm,” says Nathalie Smuha, a technology legal scholar at KU Leuven, in Belgium.

As the dust settles, it’s clear that the affair will do little to halt the spread of AI in governments—60 countries already have national AI initiatives. Private-sector companies no doubt see opportunity in helping the public sector. For all of them, the tale of the Dutch algorithm—deployed in an E.U. country with strong regulations, rule of law, and relatively accountable institutions—serves as a warning.

“If even within these favorable circumstances, such a dangerously erroneous system can be deployed over such a long time frame, one has to worry about what the situation is like in other, less regulated jurisdictions,” says Lewin Schmitt, a predoctoral policy researcher at the Institut Barcelona d’Estudis Internacionals, in Spain.

So, what might stop future wayward AI implementations from causing harm?

In the Netherlands, the same four parties that were in government prior to the resignation have now returned to government. Their solution is to bring all public-facing AI—both in government and in the private sector—under the eye of a regulator in the country’s data authority, which a government minister says would ensure that humans are kept in the loop.

On a larger scale, some policy wonks place their hope in the European Union’s AI Act, which puts public-sector AI under tighter scrutiny. In its current form, the AI Act would ban some applications, such as government social-credit systems and law enforcement use of face recognition, outright.

Something like the tax authority’s algorithm would not have been banned outright, but given its public-facing role in government functions, the AI Act would have marked it as a high-risk system. That means a broad set of regulations would apply, including a risk-management system, human oversight, and a mandate to remove bias from the data involved.

“If the AI Act had been put in place five years ago, I think we would have spotted [the tax algorithm] back then,” says Nicolas Moës, an AI policy researcher in Brussels for the Future Society think tank.

Moës believes that the AI Act provides a more concrete scheme for enforcement than its overseas counterparts, such as the one that recently took effect in China—which focuses less on public-sector use and more on reining in private companies’ use of customers’ data—and proposed U.S. regulations that are currently floating in the legislative ether.

“The E.U. AI Act is really kind of policing the entire space, while others are still kind of tackling just one facet of the issue, very softly dealing with just one issue,” says Moës.

Lobbyists and legislators are still busy hammering the AI Act into its final form, but not everyone believes that the act—even if it’s tightened—will go far enough.

“We see that even the [General Data Protection Regulation], which came into force in 2018, is still not properly being implemented,” says Smuha. “The law can only take you so far. To make public-sector AI work, we also need education.”

That, she says, will need to come through properly informing civil servants of an AI implementation’s capabilities, limitations, and societal impacts. In particular, she believes that civil servants must be able to question its output, regardless of whatever temporal or organizational pressures they might face.

“It’s not just about making sure the AI system is ethical, legal, and robust; it’s also about making sure that the public service in which the AI system [operates] is organized in a way that allows for critical reflection,” she says.

The Dutch Tax Authority Was Felled by AI—What Comes Next?