Incident 235: Chinese Insurer Ping An Employed Facial Recognition to Assess Customers’ Untrustworthiness, Which Critics Alleged Was Likely to Make Errors and Discriminate

Description: Ping An, a large insurance company in China, reportedly used facial-recognition measurements of micro-expressions and body-mass index (BMI) to flag customers as untrustworthy or unprofitable, a practice critics argued was likely to make mistakes, discriminate against certain ethnic groups, and undermine the insurance industry itself.
Alleged: Ping An developed and deployed an AI system, which harmed Ping An customers and Chinese minority groups.

Suggested citation format

Lutz, Roman. (2016-04-15) Incident Number 235. In Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID
235
Report Count
1
Incident Date
2016-04-15
Editors
Khoa Lam



Incident Reports

China’s largest insurer, Ping An, has apparently started employing artificial intelligence to identify untrustworthy and unprofitable customers. It offers a chilling example of what, if we’re not careful, the future could look like here in the U.S.

The Wall Street Journal reported that Ping An is using facial recognition software to search for “micro-expressions” on people’s faces to help decide whether they’re being truthful, whether to insure them and presumably what the terms of service should be. Secondarily, the software will gauge people’s body-mass index and well-being to determine health insurance premiums (spoiler: it costs extra if you’re fat!).

If this doesn’t trouble you, consider my open-source “skeptical questions everyone should ask about a new AI business model,” which goes something like this:

  1. Is what the AI claims to do even possible, given the limitations of the data and the technology?

  2. If it’s possible, will it achieve its stated goal? Or will it merely perpetuate human bias under the guise of “science,” like a modern-day version of phrenology?

  3. Assuming it’s fit for purpose -- a big assumption, honestly, considering how badly most AI systems actually perform -- is that purpose desirable in itself?

  4. Will it undermine the industry it was designed to aid?

Let’s assess Ping An’s facial recognition point by point. How would you train an algorithm to determine who is lying? One option is to show it a bunch of people lying and telling the truth in a laboratory setting. Problem is, people’s micro-expressions might be different when they’re lying in a lab, or getting paid to lie, or lying for the first time. So an algorithm trained on them would make a lot of mistakes in real life.

How about finding video footage of people actually lying, and showing that to the algorithm? The problem here is that most lies are never discovered. Not to mention that good liars might be better at controlling their expressions, or that different kinds of lies might look different. So I don’t think there’s any way to train the algorithm effectively.
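To make the distribution-shift worry concrete, here is a minimal, purely hypothetical sketch -- the features, numbers, and model are invented for illustration and are not drawn from Ping An’s actual system. A classifier trained on synthetic “lab” micro-expression features looks passable on lab data, then collapses to roughly chance once the feature distribution shifts, as it plausibly would outside the lab.

```python
# Purely hypothetical sketch: a lie-detection classifier trained on synthetic
# "lab" features degrades badly once the data distribution shifts in deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two invented 'micro-expression' features; label 1 = lying, 0 = truthful."""
    y = rng.integers(0, 2, size=n)
    # In the lab, liars center around 1.0 and truth-tellers around 0.0.
    # In the wild, truth-tellers drift by `shift`, erasing the cue the model learned.
    mu = np.where(y == 1, 1.0, shift)
    X = rng.normal(loc=mu[:, None], scale=1.0, size=(n, 2))
    return X, y

X_lab, y_lab = make_data(2000, shift=0.0)     # laboratory recordings
X_real, y_real = make_data(2000, shift=1.0)   # real-world conditions

clf = LogisticRegression().fit(X_lab, y_lab)
print("accuracy on lab data:       ", accuracy_score(y_lab, clf.predict(X_lab)))
print("accuracy on real-world data:", accuracy_score(y_real, clf.predict(X_real)))
```

The exact accuracies are beside the point; the shape of the failure is what matters. A model that looks fine on curated training data can be little better than a coin flip on the people it actually judges.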

But maybe finding liars isn’t Ping An’s true goal. Most likely they’re really looking for characteristics of people who end up making claims, which is what they want to avoid more than fraud. Which brings us to the second point: Will rejecting such customers also have an outsized effect on specific races or classes of people?

Most likely. The poor and downtrodden -- people living precarious, overworked lives -- tend to run into more problems, and hence have more insurance claims. And in China, human discrimination makes certain ethnic groups -- such as Uyghurs, the Muslim minority -- more likely to be poor and downtrodden (just as it does with blacks in the U.S.). So an algorithm trained to identify potential claimants would also discriminate against these people.
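A toy simulation -- again with entirely made-up data, not real demographic statistics or anything from Ping An -- shows how this proxy effect works: if group membership correlates with economic precarity, and precarity drives claims, a model that never sees group membership at all still rejects the disadvantaged group at a much higher rate.

```python
# Hypothetical illustration of proxy discrimination on synthetic data: the model
# is trained only to predict claims, yet its rejections fall unevenly by group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20000

group = rng.integers(0, 2, size=n)                    # 1 = disadvantaged group (synthetic)
precarity = rng.normal(loc=group * 1.0, scale=1.0)    # that group skews toward precarious circumstances
# More precarity -> more claims, via a simple logistic relationship.
claims = (rng.random(n) < 1 / (1 + np.exp(-(precarity - 0.5)))).astype(int)

# The insurer's model never sees `group`, only the economic proxy.
X = precarity[:, None]
model = LogisticRegression().fit(X, claims)

reject = model.predict_proba(X)[:, 1] > 0.5           # "too likely to claim" -> rejected
for g in (0, 1):
    print(f"group {g}: rejection rate {reject[group == g].mean():.2f}")
```

Nothing in the model’s inputs names the group, yet the rejection rates diverge sharply, because the proxy it optimizes on is itself unevenly distributed.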

In a sense, though, the algorithm might still be fit for purpose -- assuming its purpose is to maximize profits by avoiding expensive customers, with no constraints for fairness or long-term community health. So, moving on to the third point, is that purpose desirable? For the creators of this algorithm, maybe. They seem to be OK with discriminating against fat people in the pursuit of profit, so why not the poor and marginalized, too?

It’s not the kind of world I would want to inhabit, which is why I think it serves as a cautionary tale. Here in the U.S., just one thing prevents insurance companies from discriminating against specific groups of customers: The requirement, contained in Obamacare, that they insure all comers, sick and healthy, for the same price. This was made tenable by the so-called individual mandate, which kept the pool of insured people as large as possible by requiring everyone, including the healthy, to buy insurance.

Now, though, Congress has effectively repealed the individual mandate, and state attorneys general -- along with the Trump administration -- are challenging the take-all-comers requirement in court. This means that in the very near future, insurers may be able to reject sick people (or people they expect to get sick) and accept only those customers who they don’t expect to use the insurance. To that end, they have been working furiously on big data and AI to differentiate expensive sick people from inexpensive, healthy people.

Insurance that excludes people who might need it is no longer insurance. Which answers the fourth question. So the worst part of Ping An’s terrible AI isn’t that it won’t work for its stated purpose. The real danger is that it might work too well.

China Knows How to Take Away Your Health Insurance