AI Incident Database

Report 835

Associated Incidents

Incident 479 Report
LinkedIn Search Prefers Male Names

LinkedIn’s search algorithm apparently favored men until this week
qz.com · 2016

Until Sept. 7, LinkedIn users searching for female contacts on the site may have noticed some strange results. Searches for common female names were yielding suggestions for male names as well.

Take a LinkedIn search for “Stephanie Williams.” Earlier this week, that query returned the result, “did you mean Stephen Williams?” (in addition to the 2,500-plus users actually named Stephanie Williams). A search for “Stephen Williams,” however, simply displayed the 7,200 results for people with that name.

The same was true of searches for at least a dozen other popular female first names in the US, a Seattle Times investigation revealed. LinkedIn wondered whether users searching for Andrea meant Andrew, Danielle meant Daniel, and Alexa meant Alex. Searches for the US’ 100 most common male names didn’t return suggestions for female names.

LinkedIn’s “did you mean” results are fueled by an algorithm designed to suggest names with similar spellings. The algorithm makes recommendations based on how frequently names have shown up in past queries of the company’s more than 450 million member profiles, says spokesperson Suzi Owens. “It is not anything to do with gender,” she said.

All the same, on Sept. 7 the Silicon Valley-based company rolled out a change to the algorithm that enables it to explicitly recognize popular names as such, so that the algorithm doesn’t try to correct them.

It appears to be working: Searches for first names like Dana, Joan, Danielle, Alexa, and Stephanie no longer return any “did you mean” results.
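The mechanism the article describes can be sketched in a few lines: suggest a similarly spelled name ranked by past-query frequency, then suppress the suggestion when the query itself is a recognized popular name. This is a minimal illustration, not LinkedIn's actual implementation; the name lists and frequency counts below are hypothetical stand-ins for the company's search logs.

```python
from difflib import get_close_matches

# Hypothetical past-query frequencies standing in for LinkedIn's search logs.
QUERY_FREQUENCIES = {
    "stephen": 7200,
    "stephanie": 2500,
    "daniel": 9000,
    "danielle": 3000,
}

# Allowlist of popular first names. The post-fix algorithm skips
# suggestions for any query it recognizes as a common name.
POPULAR_NAMES = {"stephanie", "stephen", "daniel", "danielle", "alexa", "alex"}

def did_you_mean(query, use_popular_name_fix=False):
    """Return a "did you mean" suggestion for `query`, or None.

    Candidates are names with similar spellings, ranked by how often
    they appear in past queries -- so a more frequently searched name
    (e.g. "Stephen") can be suggested even when the query itself
    (e.g. "Stephanie") is a valid, common name.
    """
    q = query.lower()
    if use_popular_name_fix and q in POPULAR_NAMES:
        return None  # recognized popular name: don't try to "correct" it
    candidates = get_close_matches(q, QUERY_FREQUENCIES, n=3, cutoff=0.6)
    candidates = [c for c in candidates if c != q]
    if not candidates:
        return None
    # The most frequently searched similar name wins.
    return max(candidates, key=QUERY_FREQUENCIES.get)
```

With the fix disabled, a search for “Stephanie” yields the suggestion “stephen” (it appears more often in past queries); with `use_popular_name_fix=True`, the query is recognized as a popular name and no suggestion is returned.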

The issue underscores the biases present in artificially intelligent systems that learn from other users’ behavior. Earlier this year, Microsoft was forced to take its millennial chatbot offline after it learned to make racist and sexist remarks from users on Twitter. (Microsoft is also acquiring LinkedIn.)

“As with all machine learned systems, there are always edge cases and we are constantly working hard to improve and create the best possible experience for our members,” says Owens.

