Open up the photo app on your phone and search “dog,” and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog “looks” like.
This modern-day marvel is the result of machine learning, a form of artificial intelligence. Programs like this comb through millions of pieces of data and make correlations and predictions about the world. Their appeal is immense: Machines can use cold, hard data to make decisions that are sometimes more accurate than a human’s.
But machine learning has a dark side. If not used properly, it can make decisions that perpetuate the racial biases that exist in society. It’s not because the computers are racist. It’s because they learn by looking at the world as it is, not as it ought to be.
Recently, newly elected Rep. Alexandria Ocasio-Cortez (D-NY) made this point in a discussion at a Martin Luther King Jr. Day event in New York City.
“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she told writer Ta-Nehisi Coates at the annual MLK Now event. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
The next day, the conservative website the Daily Wire derided the comments.
But Ocasio-Cortez is right, and it’s worth reflecting on why.
If we’re not careful, AI will perpetuate bias in our world. Computers learn how to be racist, sexist, and prejudiced in much the same way a child does, as computer scientist Aylin Caliskan, now at George Washington University, told me in a 2017 interview. The computers learn from their creators — us.
“Many people think machines are not biased,” Caliskan, who was at Princeton at the time, said. “But machines are trained on human data. And humans are biased.”
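Caliskan’s research made this concrete by probing word embeddings, the numerical word representations that machines learn from large bodies of human-written text. As a rough, simplified illustration of that idea (not her exact method), you can compare similarity scores in a pretrained embedding and watch human associations fall out of the numbers. The sketch below assumes the gensim library is installed; the model name is one of gensim’s small downloadable pretrained vector sets, and the word lists are chosen purely for illustration.

```python
# Rough sketch: measuring associations in word embeddings,
# loosely inspired by the association tests in Caliskan's research.
# Assumes gensim is installed; "glove-wiki-gigaword-50" is one of
# gensim's downloadable pretrained models.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # vectors learned from web text

def mean_similarity(targets, attributes):
    """Average cosine similarity between two word lists."""
    return sum(vectors.similarity(t, a)
               for t in targets for a in attributes) / (len(targets) * len(attributes))

careers = ["engineer", "scientist", "programmer"]
home = ["home", "family", "children"]

# If the training text associates one gender more with careers,
# the learned vectors will reflect that association.
print(mean_similarity(["he"], careers) - mean_similarity(["she"], careers))
print(mean_similarity(["he"], home) - mean_similarity(["she"], home))
```

Because the vectors were fit to human-written text, any gendered pattern in that text ends up encoded in the numbers, which is exactly Caliskan’s point: machines are trained on human data, and humans are biased.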
We think artificial intelligence is impartial. Often, it’s not.
Nearly all new consumer technologies use machine learning in some way. Take Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own. In other cases, machine learning programs make predictions about which résumés are likely to yield successful job candidates, or how a patient will respond to a particular drug.
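To see what “learned on its own” means, here is a minimal sketch using Python’s scikit-learn library. A translation system is far too large to reproduce here, so this toy example applies the same learn-from-labeled-examples recipe to a different task; the texts, labels, and spam-filter framing are all invented for illustration.

```python
# Minimal sketch of supervised machine learning with scikit-learn:
# nobody writes rules; the model infers patterns from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: texts paired with labels.
texts = ["win a free prize now", "cheap pills online",
         "meeting at noon tomorrow", "see you at lunch"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the "learning" step: fit patterns to the data

print(model.predict(["free pills now"]))   # likely ['spam']
print(model.predict(["lunch tomorrow?"]))  # likely ['ham']
```

No rule anywhere says the word “free” signals spam. The model inferred that from the labeled examples, and it would infer something different from different data. That is the whole appeal, and, as we’ll see, the whole problem.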
A machine learning program sifts through billions of data points to solve problems (such as “can you identify the animal in this photo?”), but it doesn’t always make clear how it has solved them. And it’s increasingly clear these programs can develop biases and stereotypes without us noticing.
In 2016, ProPublica published an investigation of a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software rated black people as higher risk than white people.
“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation,” ProPublica explained. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to even more fundamental decisions about defendants’ freedom.”
The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.
This story reveals a deep irony about machine learning. The appeal of these systems is they can make impartial decisions, free of human bias. “If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long,” ProPublica wrote.
But in practice, machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.
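How does a risk score end up prejudiced when no one programmed prejudice in? A toy sketch, with entirely synthetic data, shows the mechanism: if one group was historically policed more heavily, the records mark that group as “rearrested” more often, and a model fit to those records assigns the group higher risk even when underlying behavior is identical. Nothing below reflects the actual software ProPublica studied; it is a minimal illustration of the feedback loop.

```python
# Toy sketch (entirely synthetic data) of how a risk model can
# automate historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)     # 0 or 1: a demographic group
behavior = rng.normal(size=n)     # true risk, same distribution for both groups

# Historical label: "rearrested" depends on behavior, but group 1 was
# policed more heavily, so its members show up as rearrested more often.
logit = behavior + 1.5 * group
rearrested = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased records, with group as an input feature.
X = np.column_stack([behavior, group])
model = LogisticRegression().fit(X, rearrested)

# Identical behavior, different group -> different "risk" score.
print(model.predict_proba([[0.0, 0]])[0, 1])  # lower risk score
print(model.predict_proba([[0.0, 1]])[0, 1])  # higher risk score
```

Dropping the group column doesn’t necessarily fix this, either: features like zip code or arrest history can act as proxies for it.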
Other cases are more ambiguous. In China, researchers paired facial recognition technology with machine learning to analyze driver’s license photos and predict who is a criminal. The system purportedly achieved 89.5 percent accuracy.
Many experts were extremely skeptical of the findings. Which facial features was this program picking up on? Was it the physical features of ethnic groups that are discriminated against in the justice system? Was it picking up on signs of a low socioeconomic upbringing that can leave lasting marks on our faces?
It can be hard to know. (Scarier: There’s one startup called Faception that claims it can detect terrorists or pedophiles just by looking at faces.)
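One reason for the skepticism: a classifier can post impressive accuracy by latching onto a confound in the dataset rather than anything about faces. In this synthetic sketch, the “face” features are pure noise, yet the model scores well because a nuisance variable (imagine photo brightness differing between the two sources of images) happens to track the label. Everything here is invented for illustration.

```python
# Toy sketch: high accuracy driven entirely by a confound, not faces.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
label = rng.integers(0, 2, n)                  # "criminal" label in the dataset
face = rng.normal(size=(n, 5))                 # face features: pure noise
brightness = label + 0.3 * rng.normal(size=n)  # confound correlated with label

X = np.column_stack([face, brightness])
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
print(model.score(X_te, y_te))  # high accuracy, from the confound alone
print(model.coef_[0])           # weight concentrated on the brightness column
```

An 89.5 percent accuracy figure, on its own, can’t distinguish between these two stories.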
“You got the algorithms which are super powerful, but just as important is what kind of data you feed the algorithms to teach them to discriminate,” Princeton psychologist and facial perception expert Alexander Todorov told me in a 2017 interview, while discussing a controversial paper on using machine learning to predict sexual orientation from faces.