Incident 49: AI Beauty Judge Did Not Like Dark Skin

Description: In 2016, after the artificial intelligence software Beauty.AI judged an international beauty contest and declared a majority of the winners to be white, researchers found that Beauty.AI was racially biased in determining beauty.

Alleged: Youth Laboratories developed and deployed an AI system, which harmed People with Dark Skin.

Incident Stats

Incident ID
49
Report Count
10
Incident Date
2016-09-05
Editors
Sean McGregor

CSETv0 Taxonomy Classifications

Taxonomy Details

Full Description

In 2016, Beauty.AI, an artificial intelligence software designed by Youth Laboratories and supported by Microsoft, was used to judge the first international beauty contest. Of the 600,000 contestants who submitted selfies to be judged by Beauty.AI, the software chose 44 winners, of which a majority were white, a handful were Asian, and only one had dark skin. While a majority of contestants were white, approximately 40,000 submissions came from India and another 9,000 came from Africa. Controversy ensued over whether Beauty.AI was racially biased, as it had not been trained on enough images of people of color to judge their beauty fairly.
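The alleged failure mode, a model trained on too few faces of people of color that then misjudges them at contest time, can be reproduced in miniature. The sketch below is purely illustrative: the features, groups, and "attractiveness" rule are invented, and the off-the-shelf classifier stands in for whatever deep learning models Beauty.AI actually used.

```python
# Toy illustration of the alleged failure mode, NOT Beauty.AI's actual code:
# a "beauty" classifier trained almost exclusively on one group extrapolates
# poorly to faces unlike anything in its training set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
dim = 8

# Invented "face features". The training pool is drawn from group A only,
# standing in for a dataset with almost no people of color.
train_a = rng.normal(loc=0.0, scale=1.0, size=(10_000, dim))
# "Attractive" = close to the typical features seen in training.
y_train = (np.linalg.norm(train_a, axis=1) < np.sqrt(dim)).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_a, y_train)

# At contest time, group B's faces cluster around a different centroid.
test_a = rng.normal(loc=0.0, scale=1.0, size=(1_000, dim))
test_b = rng.normal(loc=1.5, scale=1.0, size=(1_000, dim))

# Group B's scores collapse: even B's most typical faces lie outside the
# region the model ever saw labeled attractive.
print("mean score, group A:", model.predict_proba(test_a)[:, 1].mean())
print("mean score, group B:", model.predict_proba(test_b)[:, 1].mean())
```

In this toy setup the model has no notion of group B's norms at all; it simply extrapolates from the majority group's statistics, which is one mechanistic reading of the controversy described above.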

Short Description

In 2016, after the artificial intelligence software Beauty.AI judged an international beauty contest and declared a majority of the winners to be white, researchers found that Beauty.AI was racially biased in determining beauty.

Severity

Negligible

Harm Distribution Basis

Race

Harm Type

Harm to intangible property

AI System Description

Artificial intelligence software that uses deep learning algorithms to evaluate beauty based on factors such as facial symmetry, blemishes, wrinkles, estimated age compared with apparent age, and comparisons to actors and models
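To make one of those factors concrete, here is a minimal sketch of how a "symmetry" feature could be computed from detected facial landmarks. The landmark format, midline convention, and scoring formula are all assumptions made for illustration, not Beauty.AI's published method.

```python
# Illustration only: one plausible way to score facial symmetry from
# detected landmarks. Not Beauty.AI's actual implementation.
import numpy as np

def symmetry_score(landmarks: np.ndarray) -> float:
    """Return a facial-symmetry score in (0, 1] from (N, 2) landmark coords."""
    midline_x = landmarks[:, 0].mean()               # assumed vertical face axis
    mirrored = landmarks.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    # Distance from each mirrored landmark to its nearest original landmark.
    dists = np.linalg.norm(mirrored[:, None, :] - landmarks[None, :, :], axis=2)
    asymmetry = dists.min(axis=1).mean()
    face_width = np.ptp(landmarks[:, 0]) or 1.0      # normalize; avoid div-by-zero
    return float(np.exp(-asymmetry / face_width))    # 1.0 = perfectly symmetric

# A perfectly mirror-symmetric set of five landmarks scores 1.0.
pts = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -0.5], [-0.5, -1.0], [0.5, -1.0]])
print(symmetry_score(pts))  # -> 1.0
```

In practice a deep learning judge would more likely learn such cues directly from pixels; an explicit feature like this is just a convenient mental model for the "symmetry" factor listed above.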

System Developer

Youth Laboratories

Sector of Deployment

Arts, entertainment and recreation

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Deep learning, open-source

AI Applications

biometrics, image classification

Location

Global

Named Entities

Youth Laboratories, Microsoft

Technology Purveyor

Youth Laboratories, Microsoft, Insilico Medicine

Beginning Date

1/2016

Ending Date

6/2016

Near Miss

Unclear/unknown

Intent

Accident

Lives Lost

No

Data Inputs

images of people's faces

Why An AI-Judged Beauty Contest Picked Nearly All White Winners
motherboard.vice.com · 2016

Beauty pageants have always been political. After all, what speaks more strongly to how we see each other than which physical traits we reward as beautiful, and which we code as ugly? It wasn't until 1983 tha…

A beauty contest was judged by AI and the robots didn't like dark skin
theguardian.com · 2016

The first international beauty contest decided by an algorithm has sparked controversy after the results revealed one glaring factor linking the winners

The first international beauty contest judged by “machines” was supposed to use objecti…

Is AI RACIST? Robot-judged beauty contest picks mostly white winners out of 6,000 contestants
dailymail.co.uk · 2016

Only a few winners were Asian and one had dark skin, most were white

Just months after Microsoft's Tay artificial intelligence sent racist messages on Twitter, another AI seems to have followed suit.

More than 6,000 selfies of individuals w…

The first AI-judged beauty contest taught us one thing: Robots are racist
thenextweb.com · 2016

With more than 6,000 applicants from over 100 countries competing, the first international beauty contest judged entirely by artificial intelligence just came to an end. The results are a bit disheartening.

The team of judges, a five robot …

AI judges of beauty contest branded racist
trustedreviews.com · 2016

It’s not the first time artificial intelligence has been in the spotlight for apparent racism, but Beauty.AI’s recent competition results have caused controversy by clearly favouring light skin.

The competition, which ran online and was ope…

The First Ever Beauty Contest Judged by Artificial Intelligence
gineersnow.com · 2017

If you’re one who joins beauty pageants or merely watches them, what would you feel about a computer algorithm judging a person’s facial attributes? Perhaps we should ask those who actually volunteered to be contestants in a beauty contest …

What Will Happen When Your Company’s Algorithms Go Wrong?
hbr.org · 2017

An AI designed to do X will eventually fail to do X. Spam filters block important emails, GPS provides faulty directions, machine translations corrupt the meaning of phrases, autocorrect replaces a desired word with a wrong one, biometric s…

Artificial Intelligence Has a Racism Issue
innotechtoday.com · 2017

It’s long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans’ emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way int…

Artificial Intelligence Has a Bias Problem, and It's Our Fault
au.pcmag.com · 2018

In 2016, researchers from Boston University and Microsoft were working on artificial intelligence algorithms when they discovered racist and sexist tendencies in the technology underlying some of the most popular and critical services we us…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.

Similar Incidents

By textual similarity

Gender Biases in Google Translate · 10 reports

TayBot · 28 reports