Incident 115: Genderify’s AI to Predict a Person’s Gender Revealed by Free API Users to Exhibit Bias
Description: An AI system that predicted a person's gender from their name, email address, or username was reported by its users to produce biased and inaccurate results.
Entities
Alleged: Genderify developed and deployed an AI system, which harmed Genderify customers and gender minority groups.
Incident Stats
Incident ID
115
Report Count
3
Incident Date
2020-07-28
Editors
Sean McGregor, Khoa Lam
Incident Reports
Reports Timeline
Some tech companies make a splash when they launch; others seem to bellyflop.
Genderify, a new service that promised to identify someone’s gender by analyzing their name, email address, or username with the help of AI, looks firmly to be in th…

Just hours after making waves and triggering a backlash on social media, Genderify — an AI-powered tool designed to identify a person’s gender by analyzing their name, username or email address — has been completely shut down.
Launched last…

The creators of a controversial tool that attempted to use AI to predict people's gender from their internet handle or email address have shut down their service after a huge backlash.
The Genderify app launched this month and invited peop…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.