Amazon’s AI gurus scrapped a new machine-learning recruiting engine earlier this month. Why? It transpired that the AI behind it was sexist. What does this mean as we race to produce ever-better artificial intelligence, and how can we understand the risks of machines learning the worst of our own traits?
Trained on the past decade’s worth of data about job applicants, the Amazon model began to penalize CVs that included the word “women.” The incident calls to mind another experiment in AI bias, Microsoft’s “Tay” project, which the company pulled from the web after the bot learned racism from users it was chatting with on GroupMe and Kik. In Amazon’s case, however, rogue users weren’t to blame. The AI was learning from the historical data of tech’s largest global company.
An artificial intelligence that dislikes women or people of color sounds like a concept straight out of a Twilight Zone episode. But sadly, it’s reality.
How did we get to this situation? And is it possible to build an AI that won’t reflect the bone-deep prejudices that are – knowingly or unknowingly – built into our social systems? To answer that second question, it’s crucial to address the first one.
How a Sexist AI Happens
Okay, the first point to make is that sexist or racist AI doesn’t emerge from nowhere. Instead, it reflects the prejudices already deeply held within both society at large, and the tech industry specifically.
Don’t believe us about sexism in tech? One study from earlier this year found that in 57 out of 58 major U.S. cities, women in tech were paid less than men. Last year, two female tech cofounders demonstrated tech sexism at work by proving they could make better business connections once they invented a fictional male cofounder.
And as long as tech companies continue overlooking sexism, they’ll keep perpetuating a system that prioritizes male applicants and promotes male staff.
Sexist AIs Start With a Blinkered Industry…
The tech world loves rapid growth above all else. But this year, it has finally begun to come to terms with the impact its culture can have, and a sense of responsibility is taking root.
Few sum it up better than former Reddit product head Dan McComas, whose recent New York Magazine interview (titled ‘I Fundamentally Believe That My Time at Reddit Made the World a Worse Place’) includes this insight:
“The incentive structure is simply growth at all costs. There was never, in any board meeting that I have ever attended, a conversation about the users, about things that were going on that were bad, about potential dangers, about decisions that might affect potential dangers. There was never a conversation about that stuff.”
…And Machine Learning Perpetuates Them
It’s this attitude that’s at the core of prejudiced AI, which perpetuates the system just as clearly, if a little more mechanically. As Lin Classon, director of public cloud strategy at Ensono, puts it, the process of machine learning is the issue.
“Currently, the most common application of AI is based on feeding the machine lots of data and teaching it to recognize a pattern. Because of this, the results are as good as the data used to train the algorithms,” she tells me.
Ben Dolmar, director of software development at the Nerdery, backs her up.
“Almost all of the significant commercial activity in artificial intelligence is happening in the field of machine learning,” explains Dolmar. “It was machine learning that drove AlphaGo, and it’s machine learning that is driving the leaps we’re making in natural language processing, computer vision, and a lot of recommendation engines.”
Machine learning begins by providing a model with a core data set. The model trains on this before producing its own outputs. Any historical issues in the core data are then reproduced. Translation? Sexist data turns into sexist outputs.
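That bias-in, bias-out mechanism is easy to reproduce in miniature. The sketch below trains a naive token-scoring "model" on a hypothetical, deliberately skewed hiring history (an invented toy dataset, not Amazon's); because CVs containing the word "women" were historically rejected, the model learns to penalize that word:

```python
from collections import Counter

# Hypothetical historical data: (CV tokens, hired?) pairs.
# The dataset is deliberately skewed: CVs mentioning "women"
# (e.g. "women's chess club captain") were historically rejected.
history = [
    (["python", "leadership"], True),
    (["java", "captain"], True),
    (["python", "women"], False),
    (["women", "leadership"], False),
    (["java", "python"], True),
]

def token_scores(history, smoothing=1.0):
    """Score each token by how often it co-occurs with a hire."""
    hired, rejected = Counter(), Counter()
    for tokens, was_hired in history:
        (hired if was_hired else rejected).update(tokens)
    vocab = set(hired) | set(rejected)
    # Additive smoothing keeps rare tokens from scoring exactly 0 or 1.
    return {t: (hired[t] + smoothing) / (hired[t] + rejected[t] + 2 * smoothing)
            for t in vocab}

scores = token_scores(history)
# The model has "learned" that the word "women" predicts rejection:
print(scores["women"])   # 0.25 -- penalized
print(scores["python"])  # 0.6  -- rewarded
```

No one told this model to be sexist; it simply found the pattern its training data contained. Real recruiting models are far more sophisticated, but the failure mode is the same.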
“It’s not unlike painting with watercolors, where the brush has to be clean or it taints the colors,” says Classon. And, in modern society, sexism turns up everywhere, Classon says, whether it’s in “recruiting, loan applications or stock photos.” Or even in the emptiness of women’s restrooms at major tech conferences, as Classon has pointed out to Tech.Co before.
How to Combat AI Prejudice
How do we solve a problem like AI prejudice? Classon boils it down to a key guiding principle: conscientious and collective vigilance. That begins with ensuring the community behind AI development is equipped to spot the issues, which takes us right back to the core problem: putting a diverse developer community in place, so that issues are found and addressed faster.
Practically speaking, Classon has further suggestions:
Increased Transparency
Right now, machine learning algorithms function like black boxes: Data goes in, trained models come out.
“DARPA has recognized [that this leaves users unaware of how the system came to any decision] and is working on Explainable Artificial Intelligence so future AI will be able to explain