Citation record for Incident 19

Suggested citation format

Yampolskiy, Roman. (2013-01-23) Incident Number 19. in McGregor, S. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 19
Report Count: 27
Incident Date: 2013-01-23
Editors: Sean McGregor

CSET Taxonomy Classifications

Taxonomy Details

Full Description

Advertisements chosen by Google AdSense are reported as producing sexist and racist results. In a 2015 Carnegie Mellon study, 17,370 fake profiles were created to visit jobseeker sites, and the profiles were shown around 600,000 advertisements. Advertisements for high-paying executive jobs and career building were shown 1,852 times to male profiles but only 318 times to female profiles. Companies are allowed to filter who is shown their advertisements, and this filtering is credited with the difference in outcomes between male and female profiles. In a separate instance, Harvard professor Latanya Sweeney released a 2013 study showing how black-identifying names, when searched in Google, are more likely to return advertisements involving arrests. When testing 2,184 racially associated names, black-identifying names returned advertisements using the word "arrest" 81-95% of the time, while white-identifying names did so 23-29% of the time on one site and 0-60% on the other. All of the arrest ads were from www.instantcheckmate.com, implying, again, that the company's choice of who to target their advertising toward played a factor in the discriminatory results.

Short Description

Advertisements chosen by Google AdSense are reported as producing sexist and racist results.

Severity

Unclear/unknown

Harm Distribution Basis

Race, Sex

Harm Type

Harm to social or political systems, Harm to civil liberties, Other: Reputational harm

AI System Description

Google AdSense, an algorithm used to target advertisements toward relevant audiences.

System Developer

Google

Sector of Deployment

Information and communication

Relevant AI functions

Perception, Cognition, Action

AI Techniques

Google AdSense

AI Applications

targeted advertising

Location

Global

Named Entities

Google, Harvard University, Carnegie Mellon University, www.instantcheckmate.com

Technology Purveyor

Google, Instant Checkmate

Beginning Date

2013-01-01

Ending Date

2015-01-01

Near Miss

Unclear/unknown

Intent

Unclear

Lives Lost

No

Data Inputs

Advertiser's preference, Google user's search history, Google user's purchase history

Incidents Reports

A Google search for a person's name, such as "Trevon Jones", may yield a personalized ad for public records about Trevon that may be neutral, such as "Looking for Trevon Jones?", or may be suggestive of an arrest record, such as "Trevon Jones, Arrested?". This writing investigates the delivery of these kinds of ads by Google AdSense using a sample of racially associated names and finds statistically significant discrimination in ad delivery based on searches of 2184 racially associated personal names across two websites. First names, assigned at birth to more black or white babies, are found predictive of race (88% black, 96% white), and those assigned primarily to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while those assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word "arrest" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. On the more ad trafficked website, a black-identifying name was 25% more likely to get an ad suggestive of an arrest record. A few names did not follow these patterns. All ads return results for actual individuals and ads appear regardless of whether the name has an arrest record in the company's database. The company maintains Google received the same ad text for groups of last names (not first names), raising questions as to whether Google's technology exposes racial bias.

Discrimination in Online Ad Delivery

Frequently Asked Questions

  1. Isn't the arrest rate of blacks higher anyway?

The ads appear regardless of whether the company sponsoring the ad has a criminal record for the name. The appearance of the ads is not related to any arrest statistics or the like.

  1. What is racism?

From the paper: "Racial discrimination results when a person or group of people is treated differently based on their racial origins [5]. Power is a necessary precondition, for it depends on the ability to give or withhold benefits, facilities, services, opportunities etc., from someone who should be entitled to them, and are denied on the basis of race. Institutional or structural racism is a system of procedures/patterns whose effect is to foster discriminatory outcomes or give preferences to members of one group over another [6]."

Notice that racism can result, even if not intentional.

The EEOC provides a test in cases of employment for a charge of discrimination. To make a determination, the EEOC uses an "adverse impact test," which measures whether practices, intentional or not, have a disproportionate effect. If the ratio of the effect across groups is less than 80 percent, the employer may be held responsible for discrimination. These ads are not necessarily used for employment, so the computation here is for reference: the ratio for the appearance of the ads was 77 percent at one website and 40 percent at the other, both showing adverse impact.
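For reference, here is a minimal sketch of that four-fifths computation; the rates below are hypothetical placeholders, not figures from the study:

```python
# Minimal sketch of the EEOC "four-fifths" (80%) adverse impact test described
# above. The rates are hypothetical placeholders, not figures from the study.

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower rate to the higher; below 0.80 suggests adverse impact."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical rates at which each group received the neutral (non-"arrest") ad:
rate_white = 0.60
rate_black = 0.45

ratio = adverse_impact_ratio(rate_white, rate_black)
print(f"impact ratio: {ratio:.0%}")  # 75%, under the 80% threshold
print("adverse impact" if ratio < 0.80 else "no adverse impact")
```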

  1. Who is to blame?

The current study documents and observes that there is discrimination in the delivery of the ads. We do not yet know why the discriminatory effect occurs. We have some work underway to help us better understand what may be happening. Did the company provide ads suggestive of arrest disproportionately to black-identifying names? Or, did the company provide roughly the same ads evenly across racially associated names but society clicked ads suggestive of arrest more often for black identifying names? Google uses cloud-caching strategies to deliver ads quickly, might these strategies bias ad delivery towards ads previously loaded in the cloud cache? Is there a combinatorial effect?

  1. What is a black-sounding name?

The study uses first names to predict race. First names that have the highest ratio of frequency in one racial group to frequency in the other racial group can be racially identifying. The first names used in the study came from earlier research which computed comparative frequencies from birth records. The study compared the online images of people having these first names to the race predicted and found these first names predictive of race (88 percent black, 96 percent white). The paper provides a complete breakdown by first name.

As examples, first names such as DeShawn, Darnell and Jermaine generated ads suggestive of an arrest in 81 to 86 percent of name searches on one website and 92 to 95 percent on the other, while names assigned at birth primarily to whites, such as Geoffrey, Jill and Emma, generated more neutral copy: the word "arrest" appeared in 23 to 29 percent of name searches on one site and 0 to 60 percent on the other. A few names did not follow these patterns: Dustin, a name predominantly given to white babies, generated an ad suggestive of arrest 81 and 100 percent of the time.
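To make the frequency-ratio idea concrete, here is a minimal sketch; the birth-record counts are hypothetical placeholders, not the study's data:

```python
# Frequency-ratio scoring of first names, per the method described above.
# Birth-record counts are hypothetical placeholders, not the study's data.

birth_counts = {
    # name: (count in black birth records, count in white birth records)
    "DeShawn": (940, 12),
    "Jill": (15, 870),
}

def racial_association(name: str) -> tuple[str, float]:
    """Label a name by whichever group's birth records use it far more often."""
    black, white = birth_counts[name]
    if black >= white:
        return "black-identifying", black / max(white, 1)
    return "white-identifying", white / max(black, 1)

for name in birth_counts:
    label, ratio = racial_association(name)
    print(f"{name}: {label} (about {ratio:.0f}:1)")
```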

  1. How can I see these ads?

Samples appear in the paper, which you can download here. Samples are also available in a slideshow at foreverdata.org.

The best way to view ad delivery right now, or to search what ads appear with your own name, is to go to a site that serves "Ads by Google". Ads appear much more often on these other sites than on google.com, a finding and rate reported in the paper. As of March 25, 2013, the ads no longer appear on Reuters.com, but they continue to appear at other sites hosting Ads by Google. Try entering a name in the search bar at chicagotribune.com.

Ads are based on the first and last names of real people. Ads suggestive of arrest may appear even if there is no criminal record for the person in the company's database.

Arrest ads continue to appear, based on a random check on March 26, 2013. Try entering a name in the search bar at chicagotribune.com.

  1. What is the harm?

Whenever someone queries your name in a search engine, one of these ads appears. Perhaps you are in competition for an award, an appointment, a promotion, or a new job, or maybe you are in a position of trust, such as a professor, a physician, a banker, a judge, a manager, or a volunteer, or perhaps you are completing a rental application, selling goods, applying for a loan, joining a social club, making new friends, dating, or engaged in any one of hundreds of circumstances for which an online searcher seeks to learn more about you. Appearing alongside your list of accomplishments is an advertisement implying you may have a criminal record, whether you actually have one or not. Worse, the ads don't appear for your competitors.

Discrimination in Online Ad Delivery

In 2013, Harvard professor Latanya Sweeney found that racial discrimination pervades online advertising delivery. In a study, she found that searches on black-identifying names such as Trevon, Lakisha, and Darnell are 25% more likely to be served with an ad from Instant Checkmate offering a background check to find out whether the person has been arrested. The exact cause is difficult to pinpoint without greater insight into the inner workings of Google AdSense than the company is willing to grant. Both Google and Instant Checkmate denied that they engaged in racial profiling.

https://www.technologyreview.com/s/510646/racism-is-poisoning-online-ad-delivery-says-harvard-professor/

Writer: Technology Review

Publication: Technology Review

Publication date: 2013-02-04

Embedded racism determines online advertising placement

Names typically associated with black people are more likely to produce adverts related to criminal activity, according to the Harvard University paper.

A Google search for a name such as Tom Smith may bring up personalised public records, such as “Looking for Tom Smith”, or may be suggestive of an arrest record, such as “Tom Smith, arrested?”.

According to the research, names given primarily to black babies - such as DeShawn, Darnell and Jermaine - are more likely than those associated with white babies to produce adverts with links to websites which offer criminal record checks.

The study analysed the type of advertisements that appeared on Google when certain names were searched for.

It looked at Google.com's core search engine, as well as the search function of Reuters.com - which also displays Google's advertising.

Prof Sweeney's investigation suggests that names linked with black people - as defined by a previous study into racial discrimination in the workplace - were 25 per cent more likely to have results that prompted the searcher to click on a link to search criminal record history.

Google's advertising algorithms are based on keywords and user behaviour. They learn over time which ad text gets the most clicks from the viewers of the advertisement.

Prof Sweeney did concede that the study “raises more questions than it answers”.

She said that further work was needed, but added that the “basic message [of her study] does not change. There is discrimination in delivery of these ads”.

In a statement, Google said: "AdWords does not conduct any racial profiling. We also have an 'anti' and violence policy which states that we will not allow ads that advocate against an organisation, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads."

Google search results 'show racial bias'

“Have you ever been arrested? Imagine the question not appearing in the solitude of your thoughts as you read this paper, but appearing explicitly whenever someone queries your name in a search engine.”

Screenshot of a Google ad.

So begins Latanya Sweeney at Harvard University in a compelling paper arguing that racial discrimination plagues online ad delivery.

Many people will have experience of Googling friends, colleagues and relatives to find out about their online presence—the websites on which they appear, their pictures, hobbies and so on.

Sweeney’s interest is in the ads that appear alongside these results. When she entered her name in Google an ad appeared with the wording:

“Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com”

This is suggestive wording. It suggests that Latanya Sweeney has a criminal record the details of which can be accessed by clicking on the ad. But after hitting the link and paying the necessary subscription fee, Sweeney says she found no record of arrest.

What’s interesting about this is that Sweeney’s first name is also suggestive–that she is black. The question Sweeney asks is whether a similar search with a name suggestive of a white racial profile also serves up ads mentioning arrest records.

The answer is a powerful wake up call. Sweeney says she has evidence that black identifying names are up to 25 per cent more likely to be served with an arrest-related ad. “There is discrimination in delivery of these ads,” she concludes.

Sweeney gathered this evidence by collecting over 2000 names that were suggestive of race. For example, first names such as Trevon, Lakisha and Darnell suggest the owner is black while names like Laurie, Brendan and Katie suggest the owner is white.

She then entered these plus surnames into Google.com and Reuters.com and examined the ads they returned. Most names generated ads for public records. However, black-identifying names turned out to be much more likely than white-identifying names to generate ads that included the word “arrest” (60 per cent versus 48 per cent). All came from www.instantcheckmate.com.

She says the results are statistically significant with a less than 0.1 per cent chance that they were generated by chance.

On Reuters.com, black identifying names were 25 per cent more likely to be served with an arrest-related ad.

That’s an extraordinary result and one that raises more questions than it answers. The biggest puzzle, of course, is what causes the ads to be served up in this pattern. Here the mystery of Google’s Adsense service obscures matters considerably.

Sweeney says there are essentially three possibilities. One is that www.instantcheckmate.com has set up the arrest-mentioning ads to be served up to black identifying names. Another is that Google has somehow biased its ad serving mechanism in this way.

A more insidious explanation is that society as a whole is to blame. If Google’s Adsense service learns which ad combinations are more effective, it would first serve the arrest-related ads to all names at random. But this would change if it were to discover that click-throughs are more likely when these ads are served against a black-identifying name. In other words, the results merely reflect the discriminatory pattern of clicks from ordinary people.
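To see how this third explanation could work mechanically, here is a toy simulation of a click-weighted ad server; the templates, weights, and click rates are all hypothetical, and this is not a description of Google's actual system:

```python
# Toy simulation of the click-feedback hypothesis above: two ad templates
# start equally weighted, each click bumps the weight of the template shown,
# and templates are displayed in proportion to their weights. All numbers
# are hypothetical; this is not Google's actual algorithm.
import random

random.seed(0)
weights = {"arrest": 1.0, "neutral": 1.0}       # equal initial weighting
click_prob = {"arrest": 0.11, "neutral": 0.10}  # slight audience click bias

for _ in range(100_000):                        # simulated impressions
    shown = random.choices(list(weights), weights=list(weights.values()))[0]
    if random.random() < click_prob[shown]:
        weights[shown] += 1.0                   # clicked template gains weight

total = sum(weights.values())
for ad, w in weights.items():
    print(f"{ad}: now shown about {w / total:.0%} of the time")
```

Because clicked templates are shown more, and templates shown more collect more clicks, even a small bias in simulated audience behaviour compounds over time.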

Of course, we can’t know without greater insight into the black box that is Google Adsense.

Clearly Sweeney has discovered a serious problem here given the impact online presence can have on an individual’s employment prospects.

Whatever the cause, Sweeney says technology may offer some kind of solution. If the algorithms behind Adsense can reason about maximising revenues, she says they ought to be able to reason about the legal and social consequences of certain patterns of click-throughs.

That’s an interesting idea and one that Google, www.instantcheckmate.com and society in general ought to consider in more detail.

Ref: http://arxiv.org/abs/1301.6822 Discrimination in Online Ad Delivery

Update 4 Feb 2013 14:34 EST

In response to this blog post, a Google spokesperson sends the following comment:

“AdWords does not conduct any racial profiling. We also have an ‘anti’ and violence policy which states that we will not allow ads that advocate against an organisation, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads.”

Update 7 Feb

Instantcheckmate.com sends the following statement:

“As a point of fact, Instant Checkmate would like to state unequivocally that it has never engaged in racial profiling in Google AdWords.

We have absolutely no technology in place to even connect a name with a race and have never made any attempt to do so. The very idea is contrary to our company’s most deeply held principles and values.”

Racism is Poisoning Online Ad Delivery, Says Harvard Professor

Image caption: Prof Sweeney said technology could be used to counteract racial intolerance

A study of Google searches has found "significant discrimination" in advert results depending on the perceived race of names searched for.

Harvard professor Latanya Sweeney said names typically associated with black people were more likely to produce ads related to criminal activity.

In her paper, Prof Sweeney suggested that Google searches may expose "racial bias in society".

Google has said it "does not conduct any racial profiling".

In a statement to the BBC, the company said: "We also have an 'anti' and violence policy which states that we will not allow ads that advocate against an organisation, person or group of people."

When placing ads with Google, companies are able to specify which keywords they would like to target.

"It is up to individual advertisers to decide which keywords they want to choose to trigger their ads," the search giant said.

Arrested?

The study analysed the type of advertisements that appeared on Google when certain names were searched for.

It looked at Google.com's core search engine, as well as the search function of Reuters.com - which also displays Google's advertising.

Prof Sweeney's investigation suggests that names linked with black people - as defined by a previous study into racial discrimination in the workplace - were 25% more likely to have results that prompted the searcher to click on a link to search criminal record history.

“Technology can do more to thwart discriminatory effects and harmonise with societal norms” (Prof Latanya Sweeney, Harvard)

She found that names like Leroy, Kareem and Keisha would yield advertisements that read "Arrested?", with a link to a website which could perform criminal record checks.

Searches for names such as Brad, Luke and Katie would not - instead, they were more likely to offer websites that can provide general contact details.

"There is discrimination in the delivery of these ads," concluded Prof Sweeney, adding that there was a less than 1% chance that the findings could be based on chance.

"Alongside news stories about high school athletes and children can be ads bearing the child's name and suggesting arrest. This seems concerning on many levels."

User habits

However, she was reluctant to pinpoint a cause for the discrepancies, saying that to do so required "further information about the inner workings of Google AdSense".

She noted that one possible cause may be Google's "smart" algorithms - technology which automatically adapts advertising placement based on mass-user habits.

In other words, it may be that the search engines are reflecting society's own prejudices - as the advertising results Google serves up are often based on the most popular links previous users have clicked on.

"Over time, as people tend to click one version of ad text over others, the weights change," Prof Sweeney explained.

"So the ad text getting the most clicks eventually displays more frequently."

She argued that technology should be used to counteract this effect.

"In the broader picture, technology can do more to thwart discriminatory effects and harmonise with societal norms.

"Ads responding to name searches appear in a specific information context and technology controls that context."

Google searches expose racial bias, says study of names

Pop a name into Google and you're likely to end up with corresponding advertisements alongside your results. Wild guess which types of names are more likely to yield arrest-related ads suggesting that the person searched for has a record.

You got it — according to a Harvard University paper reported in MIT Technology Review on Monday, "black-identified" names lead to such potentially misleading and embarrassing results 25 percent more often than those that are "white-identified." Here's why researcher Latanya Sweeney says "there is discrimination in delivery of these ads," and what she suggests might be done to fix it:

Many people will have experience of Googling friends, colleagues and relatives to find out about their online presence — the websites on which they appear, their pictures, hobbies and so on.

Sweeney's interest is in the ads that appear alongside these results. When she entered her name in Google an ad appeared with the wording:

"Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com"

This is suggestive wording. It suggests that Latanya Sweeney has a criminal record the details of which can be accessed by clicking on the ad. But after hitting the link and paying the necessary subscription fee, Sweeney says she found no record of arrest.

What's interesting about this is that Sweeney's first name is also suggestive — that she is black. The question Sweeney asks is whether a similar search with a name suggestive of a white racial profile also serves up ads mentioning arrest records.

The answer is a powerful wake up call. Sweeney says she has evidence that black identifying names are up to 25 per cent more likely to be served with an arrest-related ad. "There is discrimination in delivery of these ads," she concludes.

Sweeney gathered this evidence by collecting over 2000 names that were suggestive of race. For example, first names such as Trevon, Lakisha and Darnell suggest the owner is black while names like Laurie, Brendan and Katie suggest the owner is white …

Clearly Sweeney has discovered a serious problem here given the impact online presence can have on an individual's employment prospects.

Whatever the cause, Sweeney says technology may offer some kind of solution. If the algorithms behind Adsense can reason about maximising revenues, she says they ought to be able to reason about the legal and social consequences of certain patterns of click-throughs.

Google a 'Black' Name, Get an Arrest Ad?

Google accused of racism after black names are 25% more likely to bring up adverts for criminal records checks

Professor finds 'significant discrimination' in ad results, with black names 25 per cent more likely to be linked to arrest record check services

She compared typically black names like 'Ebony' and 'DeShawn' with typically white ones like 'Jill' and 'Geoffrey'

Google has been accused of racism after allegedly linking names usually associated with black people to adverts related to criminality.

A Harvard University professor found 'significant discrimination' after comparing the adverts which appear when searching a typically black name compared with those for typically white names.

Findings showed that names typically associated with black people were 25 per cent more likely to bring up adverts related to criminality.

Racist? A new study claims that adverts linked to Google search results for typically black American names are more likely to be suggestive of criminality - like services offering to check arrest and criminal records

The study by Latanya Sweeney contrasted online searches using names such as 'Ebony' and 'DeShawn,' with those such as 'Jill' and 'Geoffrey.'

She found that adverts posted alongside search results for names likely to belong to black people were more likely to offer services like background checks for arrests and criminal records.

Searches using white-sounding names were less likely to result in advert results which suggested criminality, Professor Sweeney's research indicated.

The findings are significant since Google searching the names of potential employees, clients or even friends and dates has become commonplace.

'Advantages of knowing such information when hiring or engaging with a person relate to trustworthiness,' Professor Sweeney writes in a paper published online on the preprint server arXiv.

Consequences: The findings are significant since Google searching the names of potential employees, clients or even friends and dates has become commonplace (file photo)

Professor Sweeney gathered evidence by collecting more than 2,000 names which were suggestive of race.

She then entered these names plus surnames into Google and news agency Reuters' Google-powered search engine and looked at which adverts the search results returned.

While most names brought back adverts for public records, typically black names were much more likely to bring back those that included the word 'arrest'.

All the results came from background-checking service instantcheckmate.com. In one particular case highlighted by Professor Sweeney, a search for the black-sounding names Latanya Farrell, Latanya Sweeney and Latanya Lockett all brought up adverts for arrest checking services.

However, subsequent investigation showed only one of the names, Latanya Lockett, had an arrest record linked to it.

[Figure: Sample ads and criminal reports for 'latanya farrell' (a,b), 'latanya sweeney' (c,d), and 'latanya lockett' (e,f) appearing on google.com and reuters.com]

[Figure: Typically white names: sample ads and criminal reports for 'kristen haring' (a), 'kristen sparrow' (b), and 'kristen lindquist' (c); criminal reports from instantcheckmate.com (b,d,f)]

'In comparison, searches for “Kristen Haring”, “Kristen Sparrow” and “Kristen Lindquist” did not yield any instantcheckmate.com ads, only competitor ads, even though the company’s database reports having records for all three names and arrest records for “Kristen Sparrow” and “Kristen Lindquist”,' Professor Sweeney wrote.

She added: 'Together, these hand-picked examples describe the suspected pattern – ads suggesting

Google accused of racism after black names are 25% more likely to bring up adverts for criminal records checks

Is Google biasing the ads it serves up based on whether a name sounds "black"?

That's the conclusion of a paper by Harvard professor Latanya Sweeney, who wrote in her paper that searches on names that may be identified as black brought up ads for criminal background searches.

"Have you ever been arrested? Imagine the question not appearing in the solitude of your thoughts as you read this paper, but appearing explicitly whenever someone queries your name in a search engine," Sweeney wrote in the beginning of her paper, " Discrimination in online activity."

As reported in MIT Technology Review, Sweeney's search on her own name in Google prompted her to think more about how the search engine giant's ad delivery works:

When she entered her name in Google an ad appeared with the wording: “Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com”

Sweeney did searches for more than 2,000 names that were suggestive of race -- for example, "DeShawn, Darnell, and Jermaine" for "black" names and "Geoffrey, Jill, and Emma" for "white" names.

Searches on the black identifying names served up ads with the word "arrest" 60 percent of the time, compared with 48 percent of the time for white identifying names, Sweeney found.

What do you make of Sweeney's research? Do you agree that Google is using race to determine which ads accompany searches on names?

Harvard professor says 'black' names in Google searches more likely to offer arrest ads

Readers, I hate to break it to you, but according to Harvard the internet is racist. I suggest you stop using it immediately unless you want your patronage of Google et al to blacken your name. Actually, err, maybe wait until you finish reading...

A recent study of Google searches by Professor Latanya Sweeney has found "significant discrimination" in ad results depending on whether the name you're Googling is, statistically speaking, more likely to belong to a white person or a black person. So while Googling an Emma will probably trigger nothing more sinister than an invitation to look up Emma's phone number and address, searching for a Jermaine could generate an ad for a criminal record search. In fact, Sweeney's research suggests that it's 25% more likely you'll get ads for criminal record searches from "black-identifying" names than white-sounding ones.

So what does this mean exactly? Does Google have some sort of racial profiling tool inlaid into its algorithms? Well, not exactly. Google has unequivocally stated that it "does not conduct any racial profiling" and the research paper itself admits that it's probably not as insidious as that. Rather it posits that the demographic discrepancies probably come from "smart" algorithms which adapt ad placement based on mass-user habits. In short, writes Sweeney, the results raise "questions as to whether Google's advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness".

Woah – did someone just claim that society is racially biased? Hold the front page. While the Harvard study makes some interesting points, the research is also a telling case of digital dualism – the idea that online and offline are separate and distinct realities. This may have been true decades ago when the internet was something you "dialled-up" in order to check AltaVista for deals on VCRs, but it is now woefully outdated. Most people now see the virtual world as simply a reflection of the real world. Indeed, a report published this year by the Government Office for Science proclaims that: "The UK is now a virtual environment as well as a real place."

The question of how (and, indeed, if) technology can rid itself of what Sweeney describes as "structural racism" has some interesting parallels to debates about language that have been taking place long before Google was a twinkle in Sergey Brin's eyes. Take, for example, the phrase I used earlier, "blacken your name". It's a fairly common idiom and you'd hardly call someone out for racism if they used it; nevertheless it is a laden term. Benjamin Zephaniah has a great poem called White Comedy, which addresses the politics of this sort of phraseology: "I waz whitemailed / By a white witch / Wid white magic / An white lies," the poem begins. You get the idea.

For centuries people have been attempting to rid language of its "structural racism" by inventing politically neutral dialects. Esperanto, created by the rather wonderfully named LL Zamenhof, has been the most successful of these efforts, designed to transcend nationality and foster peace, love, harmony, all that good stuff. It hasn't quite got there yet but it has managed to spawn tens of thousands of fluent speakers, as well as around a thousand native speakers. It could be said that the technological equivalent of Esperanto is Value Sensitive Design (VSD), a belief that technology should be proactively influenced to take account of human values in the design process, rather than simply reacting to them afterwards. While this seems like a good idea on the surface, it's a viper's nest of ethical questions when you dig deeper, throwing up a broader debate about the idea of universal values and cultural relativism.

But all this theory is, perhaps, a little highbrow and detracts from the most important point in Sweeney's research: your digital footprint has profound implications on your real life. As Descartes didn't quite say: "Googlito ergo sum" – I am on Google, therefore I am. And, if what you are on Google is a potential criminal, it is going to make your chances of getting a job somewhat harder. But getting rid of this bias isn't a matter of algorithms, it's a matter of changing attitudes. There is an interesting insight into this in the word "highbrow" itself: a term that comes from the 19th-century "science" of phrenology, which used the shapes of people's skulls to justify racism. In the 1820s-1840s, when phrenology was all the rage, employers often demanded a character reference from a local phrenologist to check whether you'd be a good employee or a potential criminal. Back in the day, then, your skull served as a sort of Google search. And we didn't progress as a society by changing our skulls, rather we changed what went into them.

Can Googling be racist?

Ads pegged to Google search results can be racially biased because of how certain names are associated with blacks or whites, according to a new study.

Harvard University professor Latanya Sweeney found "statistically significant discrimination" when comparing ads served with results from online searches made using names associated with blacks and those with whites.

The study contrasted online searches using names such as "Ebony" and "DeShawn," with those such as "Jill" and "Geoffrey."

Ads posted alongside search results for names likely to belong to blacks tended to suggest criminal activity with offers along the lines of background checks for arrests, according to the study.

Searches using white-sounding names prompted results with neutral ads, Sweeney's research indicated.

The findings raise "questions as to whether Google's advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness," Sweeney said in a blog post.

Advertisers bid on terms, or key words, with high bidders getting their ads posted alongside corresponding search results. Google defends the process as race-neutral, saying outcomes are driven by decisions by advertisers.

The study dated last week was funded in part by the National Science Foundation and a grant from Google.

Study Finds Google Search Ads Are Racially Biased

Image caption: The Google search page appears on a computer screen in Washington on August 30, 2010.

Ads pegged to Google search results can be racially biased because of how certain names are associated with blacks or whites, according to a new study.

Harvard University professor Latanya Sweeney found "statistically significant discrimination" when comparing ads served with results from online searches made using names associated with blacks and those with whites.

The study contrasted online searches using names such as "Ebony" and "DeShawn," with those such as "Jill" and "Geoffrey."

Ads posted alongside search results for names likely to belong to blacks tended to suggest criminal activity with offers along the lines of background checks for arrests, according to the study.

Searches using white-sounding names prompted results with neutral ads, Sweeney's research indicated.

The findings raise "questions as to whether Google's advertising technology exposes racial bias in society and how ad and search technology can develop to assure racial fairness," Sweeney said in a blog post.

Advertisers bid on terms, or key words, with high bidders getting their ads posted alongside corresponding search results. Google defends the process as race-neutral, saying outcomes are driven by decisions by advertisers.

The study dated last week was funded in part by the National Science Foundation and a grant from Google.


(c) 2013 AFP

Online search ads expose racial bias, study finds

February 6, 2013

'Arrest' Appears With Greater Frequency in Ads Featuring 'Black' Names

The delivery of Google ads has significant racial bias, according to a study by a Harvard University professor.

Professor Latanya Sweeney says names that are more often associated with African-American people are more apt to generate ads linked to criminal activity.

When searching on sites that host Google ads, “black identifying” first names are 25 percent more likely to generate ads for Instant Checkmate, a firm that offers criminal background checks, the study revealed.

“There is less than a 0.1 per cent probability that these data can be explained by chance,” the research paper reads. “Why is this discrimination occurring? Is this Instant Checkmate, Google, or society’s fault?”

Sweeney, who is black, discovered the racial bias issue when a Google search of her name resulted in an Instant Checkmate ad, dubbed: Latanya Sweeney Arrested?

When Sweeney entered “white” names such as Kristen Lindquist or Jill Foley, the Google ad results were more generic and the Instant Checkmate ad simply read: “Located… information found on Jill Foley.”

Google, in a statement to the BBC, said it does not conduct racial profiling.

“We also have an ‘anti’ and violence policy which states that we will not allow ads that advocate against an organisation, person or group of people,” the search engine giant said in the statement, adding companies placing ads with Google can specify the keywords they want to target.

“It is up to individual advertisers to decide which keywords they want to choose to trigger their ads.”

Sweeney studied 2,184 different first names, some “black” and some “white.”

“A greater percentage of Instant Checkmate ads having the word ‘arrest’ in ad text appeared for black-identifying first names than for white-identifying first names within professional and netizen subsets,” the paper reads. “Of the 2,184 names in the study, 599, harvested using professional designations, had Instant Checkmate ads on Reuters with 217 having black associated names, 136 (63 percent) of which received ads with the word “arrest” in ad text compared to only 178 (47 percent) of 382 white associated names.”
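Those proportions, and their statistical significance, can be re-derived from the two-by-two counts in the quote alone; the chi-squared test in this sketch is an illustrative choice, not necessarily the paper's exact method:

```python
# Re-deriving the percentages quoted above and testing their significance,
# using only the 2x2 counts given in the quote.
from scipy.stats import chi2_contingency

black_arrest, black_total = 136, 217   # black-associated names with "arrest" ads
white_arrest, white_total = 178, 382   # white-associated names with "arrest" ads

print(f"black-associated: {black_arrest / black_total:.0%}")  # 63%
print(f"white-associated: {white_arrest / white_total:.0%}")  # 47%

table = [
    [black_arrest, black_total - black_arrest],
    [white_arrest, white_total - white_arrest],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p = {p:.1g}")  # p falls well below 0.001
```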

The names Darnell, Jermaine and DeShawn, for instance, had the highest percentages of ads with “arrest” appearing in the text. All, of course, are names typically associated with black men. Ads for the names Jill and Emma, names associated with white women, however, had the lowest percentage of “arrest” appearing in associated ads.

Sweeney said the study has raised more questions than answers, adding the ad placement bias could be the result of various factors. The bias could also lie with the individual advertiser or society in general.

Sweeney said while the issue warrants further study to determine its root cause, “the basic message presented in this writing does not change. There is discrimination in delivery of these ads.”

Google Ads Reveal Bias Against African-Americans, Harvard University Study Reveals

A Google search for a "racially associated name" is more likely to trigger advertisements suggesting the person has a criminal background, according to a study by a Harvard professor.

Latanya Sweeney, a professor of government and technology at Harvard University and a specialist in online privacy, found that queries for a "black identifying" name were more likely to trigger an advertisement suggesting an arrest record than names traditionally given to white babies.

The study involved searches for 2,184 racially associated names as determined by prior workplace discrimination studies. Sweeney focused her analysis on Google.com and a highly trafficked news website that displays the widely used Google AdWords advertisements.

Names often given to black babies, such as DeShawn, Darnell and Jermaine, generated ads suggesting an arrest record in 81 to 86 percent of the searches on one website and 92 to 95 percent on the other, Sweeney wrote.

By comparison, names "predominantly given to white babies," such as Geoffrey, Jill and Emma, tended to trigger ads with more neutral copy, such as "Looking for Emma Jones?"

Of the searches involving the primarily white names, advertisements containing the word "arrest" appeared in 23 to 29 percent of the searches on one site and a range of 0 to 60 percent on the other, the study said.

Sweeney wrote that the statistical difference could have an impact on job seekers. However, she said more work would need to be done in order to determine whether it is Google's algorithm, advertisers, or an inherent bias in society that explains her findings.

"There is discrimination in delivery of these ads," Sweeney concluded, though she said the study also "raises more questions than it answers."


Google AdWords determines which advertisements appear, based on keywords, advertiser bids and user behavior.

In a statement, Google said, "AdWords does not conduct any racial profiling. We also have a policy which states that we will not allow ads that advocate against an organization, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads."

Google Ad Delivery Can Show 'Racial Bias,' Says Harvard Study

Web page results of ads that appeared on-screen when Harvard professor Latanya Sweeney typed her name in a Google search. Ads featured services for arrest records. Sweeney conducted a study that concluded searches with "black sounding" names are more likely to get results with ads for arrest records and other negative information.

Latanya Sweeney, a professor of government at Harvard University, is a law-abiding citizen. So she was startled when a colleague showed her what happened when he ran her name through a Google search: an advertisement on the results page headlined “Latanya Sweeney, Arrested?”

That little display triggered a much larger research project in which Sweeney, a computer scientist and specialist in data privacy, concluded that Google searches of names more likely associated with black people often yielded advertisements for a criminal records search in that person’s name.

In a research paper recently submitted for publication, Sweeney ran more than 2,100 names of real people through Google searches. She found that names that sounded black were 25 percent more likely to trigger ads for criminal records than names that sounded white — even if, like Sweeney, the person had no criminal record.


Sweeney did not offer conclusions about exactly how this happens, or why, but said she planned further research to determine the causes.


But the frequency with which the ads are paired to black-sounding names, said Sweeney, has real consequences.


“You could be in competition for an award, a scholarship, a new job,” she said. “You could be in a position of trust, like a professor, a judge. Having ads that show up suggestive of arrest may actually discount your ability to function.”

For her study, Sweeney compiled lists of traditionally “black” names, such as Trevon, Rasheed, Ebony, and Tamika, as well as “white” names such as Brad, Cody, Amy, and Jill.

The ads show up both on searches done on Google’s home page and on other websites that have built-in search functions and allow ads from Google to appear alongside the results. In all cases, Sweeney found the ads were from the same firm: Instant Checkmate LLC, a Las Vegas company that provides online background checks.


Instant Checkmate did not respond to repeated phone calls and e-mails seeking comment.

Google, meanwhile, issued a statement denying its AdWords business discriminates. AdWords is Google’s highly profitable service in which businesses pay to have their ads appear in the results when users search particular keywords or phrases.

“AdWords does not conduct any racial profiling,” said Google, adding the company’s policies prohibit advertisements “that advocate against an organization, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads.”

Sweeney, a former professor at Carnegie Mellon University in Pittsburgh, did her undergraduate work at Harvard and was the first black woman to earn a doctorate in computer science from MIT. She founded Harvard’s Data Privacy Lab, which studies ways to share personal information over computer networks without compromising privacy.

For her study, Sweeney received funding from Google.


Sweeney said executives at Instant Checkmate told her they had bought search results from Google on the names of 100 million Americans. When one of these names is searched, Google displays an ad for Instant Checkmate, and gets a small fee if the searcher clicks on its ad. The more clicks an ad receives from searchers, the more likely it will appear on the page for that search term.

Not every search of the same name yields the same result; sometimes the advertisement from Instant Checkmate is neutral, simply offering to do a background check on the person whose name is searched. Other times, the ads from Instant Checkmate were more explicit, offering to provide an arrest record or criminal history.

Sweeney’s results dovetail somewhat with other research on “black” names, most notably a 2004 study that found employers were less likely to respond to resumes sent by people with black-sounding names.

For her research, Sweeney compiled a list of names from the 2004 study, and from a chapter in the book “Freakonomics” on distinctively black names. She then identified 2,184 people with either distinctively white or black names and confirmed the race of about 1,400 of them by looking up their photos in Google’s image database.

She found that first names were reliable predictors of a person’s race. Someone named Brad was almost always white, while someone named DeAndre was nearly always black.

Sweeney ran the names through Internet searches in two places — the main Google website, and the news site Reuters.com, which uses Google to serve ads.

Harvard professor spots Web search bias

Every job candidate lives in fear that a Google search could reveal incriminating indiscretions from a distant past. But a new study examining racial bias in the wording of online ads suggests that Google's advertising algorithms may be unfairly associating some individuals with wrongdoing they didn’t commit.

After learning that a Google search for her own name surfaced an ad for a background check service hinting that she’d been arrested, Harvard University professor Latanya Sweeney set out to investigate whether race shaped online ad results. She searched over 2,000 “racially associated names” to determine if names "previously identified by others as being assigned at birth to more black or white babies" turned up ad results that indicated a criminal record. Specifically, she focused on ads purchased by companies that provide background checks used by employers.

Sweeney concluded that so-called black-identifying names were significantly more likely to be accompanied by text suggesting that person had an arrest record, regardless of whether a criminal record existed or not.

On the left, ads delivered for full name searches on Google.com, according to Sweeney's study. On the right, ads delivered for full name searches done via Reuters.com, which offers Google results. Via Latanya Sweeney.

“There is discrimination in delivery of these ads,” Sweeney writes in her report. “Notice that racism can result, even if not intentional, and that online activity may be so ubiquitous and intimately entwined with technology design that technologists may now have to think about societal consequences like structural racism in the technology they design.”

As Sweeney notes, ads linking a person’s name with criminal activity risk harming his or her reputation by suggesting wrongdoing when there is none. She asks readers to imagine that they’re being evaluated by a potential employer, who’s told to read up on their arrests when he or she searches their name. “Worse,” writes Sweeney, “the ads don’t appear for your competitors.”

To test her hypothesis, Sweeney used existing research to find names considered either black- or white-identifying, then used those to compile more than 2,000 first and last name combinations belonging to real people. She queried those full names on Google.com and Reuters.com, both of which rely on Google’s AdSense for online ad delivery, and recorded the language of the sponsored posts that appeared. Her own name, for example, included an ad from InstantCheckmate.com that read, “Latanya Sweeney, Arrested?” and “Check Latanya Sweeney’s Arrests.”

On Reuters.com, a “black-identifying name was 25 percent more likely to get an ad suggestive of an arrest record,” Sweeney found. On Google, 92 percent of ads appearing next to black-identifying names suggested a criminal record, compared to 80 percent of white-identifying names. In fact, white individuals accounted for nearly seventy percent of all arrests and black individuals 28.4 percent of arrests, according to FBI crime statistics from 2011, the most recent year data is available.

Sweeney offers several potential explanations for the discriminatory ad copy. It may be that Instant Checkmate, which had the most online ads of any company tracked in the study, chose to link black-identifying names with ad templates suggesting a criminal record. However, Sweeney notes Instant Checkmate told her that it gave Google the same ad text to run with groups of last names, and did not vary ad templates according to first names.

In response to the study, a spokeswoman for Instant Checkmate said the company "would like to state unequivocally that it has never engaged in racial profiling in Google AdWords."

"We have absolutely no technology in place to even connect a name with a race and have never made any attempt to do so. The very idea is contrary to our company's most deeply held principles and values," the spokeswoman wrote in an email to HuffPost.

Google could be at fault, writes Sweeney, though a Google spokesman told The Huffington Post that the company does not target its users based on race, noting that advertisers are free to choose the terms against which their ads will appear.

“AdWords does not conduct any racial profiling,” the spokesman wrote in an email. “We also have an ‘anti’ and violence policy which states that we will not allow ads that advocate against an organization, person or group of people. It is up to individual advertisers to decide which keywords they want to choose to trigger their ads.”

It might also be that Google users are to blame: when an advertiser first chooses ad copy, all options are equally weighted and have an equal probability of being shown in the search results. Yet over time, as certain templates are clicked more frequently than others, Google will attempt to optimize its customer’s ad by more frequently showing the ad that garners the most clicks.

“Did Instant Checkmate provide ad templates suggestive of arrest disproportionately to black-identifying names?”

Google's Online Ad Results Guilty Of Racial Profiling, According To New Study

A lovely little piece of research that shows that the ads served up alongside Google searches could, if you were that way inclined, be seen as somewhat racist:

A recent study of Google searches by Professor Latanya Sweeney has found "significant discrimination" in ad results depending on whether the name you're Googling is, statistically speaking, more likely to belong to a white person or a black person. So while Googling an Emma will probably trigger nothing more sinister than an invitation to look up Emma's phone number and address, searching for a Jermaine could generate an ad for a criminal record search. In fact, Sweeney's research suggests that it's 25% more likely you'll get ads for criminal record searches from "black-identifying" names than white-sounding ones.

Which brings us to something of a problem. A problem that we see in other areas too. For example, a truly free market and also an oligopolistic one will show much the same activity in prices. They'll all move at the same time, whether up or down. So, we cannot tell whether a market is a free one or an oligopolistic one purely by noting that prices move all at the same time.

Here we've a very similar problem. If we postulate that Google, or the people (or even the algorithm) that place the ads, are racist, then this is the sort of result we'd expect to see. However, if it turned out that it was us users who were racist, in the way we trained the algorithm through clicking on ads previously, then we'd expect to see exactly the same results. So, we cannot, simply by noting the ad placements, work out whether it's us or Google that's displaying the racially partial attitudes.

And there's a third explanation too: that it's not any reflection of attitudes at all. Rather, that it's just a reflection of the world we do in fact live in. It's well known that the prison population is disproportionately black. Disproportionate here meaning wildly, grossly, skewed. Quite why is something we can all have a shouting match about but contributing factors are that blacks are on average poorer than other racial groupings: crime and poverty are indeed connected and always have been. There are certain crimes which are disproportionately punished: for example crack is more a poor and black drug than cocaine is yet the punishments for crack are hugely higher than those for cocaine possession or dealing. And there's most certainly room in there for the idea that there is indeed racism in the society at large.

But given that those things do exist one would expect that services to check for criminal records will indeed be used more often with black sounding names than with white or Asian such. That is, that whether racism in the wider society led to it or not, the placement of the ads is entirely rational given the society we're in.

All of which is really rather sad to my mind. Google's not racist for associating criminal records checks with black sounding names. Nor is the wider society so. For they're both entirely rational and unprejudiced actions. But that they are both entirely rational and unprejudiced actions is something of a blot on society's escutcheon.

Is Google Racist Or Is It The Rest Of Us?

Is Google’s search algorithm guilty of racism? A study by a Harvard researcher found that it could be.

Professor Latanya Sweeney says she found "statistically significant discrimination" when comparing ads served with results from online searches made using names associated with blacks and those associated with whites. Sweeney, who is African-American, began her study after learning that a Google search for her own name produced an ad for a background check service, hinting that she'd been arrested.

Her hypothesis: Names that are associated with African-Americans, such as Latanya, are more likely to trigger negative ad associations than names such as Jill, that aren't.

I haven’t spoken to Sweeney, but I don’t believe she is accusing Google of deliberate racism, and neither am I. It would be easy for some to dismiss her work as torturous political correctness, but that’s wrong too. What is important about her work, I believe, is that it gives some insight into the experience that African Americans may have on the Internet.

After all, doing a search on your own name (I bet you've done that) and being served an ad for a service called Instantcheckmate that implies you are a criminal has got to feel demeaning. When Sweeney searched on her name, Instantcheckmate ads saying “Latanya Sweeney, Arrested?” and “Check Latanya Sweeney’s Arrests” appeared in the paid results part of the search page.

Suppose her daughter had seen that?

When Sweeney then clicked on those ads and paid the fee, it turned out, not surprisingly, that there was no record of her being arrested. By way of contrast, she searched for the names "Kristen Haring", "Kristen Sparrow" and "Kristen Lindquist." Ads came up, but not from Instantcheckmate or other similar services. Yet when she searched those names on Instantcheckmate itself, there were arrest records for two of those women in the company database.

On Reuters.com, which uses Google AdSense to serve ads, a "black-identifying name was 25 percent more likely to get an ad suggestive of an arrest record," Sweeney found. On Google, 92 percent of ads appearing next to black-identifying names suggested a criminal record, compared with 80 percent for white-identifying names, she wrote.

It's not clear what's at the bottom of this. Google, of course, denies that it engages in what you'd have to call racial profiling. It may be that Instant Checkmate, which had the most online ads of any company tracked in the study, chose to link black-identifying names with ad templates suggesting a criminal record, though the company told Sweeney that it doesn't do that.

"There is discrimination in delivery of these ads," Sweeney writes in her report. "Notice that racism can result, even if not intentional, and that online activity may be so ubiquitous and intimately entwined with technology design that technologists may now have to think about societal consequences like structural racism in the technology they design."

I suspect that what may be going on here has to do with the type of searches millions of people make every day. Google's algorithm does track searches and uses that information to make search more relevant. If enough people are searching on black-sounding names and terms like crime, that might explain this.

Ultimately, I bet we'll never find out, but it's worth thinking about the ways the Web affects all of us in some very unexpected ways.

Are Google Ad Results Guilty of Racial Profiling?

UPDATED: February 20, 2013, at 10:35 a.m.

A Harvard researcher has found that typically African-American names are more likely to be linked to a criminal record in Google-generated advertisements on the online search engine and on the news site Reuters.com, a website to which Google supplies advertisements.

Latanya Sweeney, director of the Data Privacy Lab at Harvard, began her research after a colleague, government department fellow Adam Tanner, told her that his Google search of her name had generated an advertisement that read: “Latanya Sweeney: arrested.”

In disbelief, Sweeney began poking around online and found that advertisements from criminal records site InstantCheckmate.com incorrectly suggested that she had an arrest history. The two then plugged Tanner’s name into Google and, to the pair’s surprise, it generated a neutral InstantCheckmate ad that did not hint at a criminal record.

“Adam jumped to the conclusion that [the advertisements] were coming up on Black-sounding names,” said Sweeney. “I spent hours trying to show him that he was wrong and couldn’t.”


Thus began a research study, partially funded by Google, that involved extensive combing of the Internet and numerous databases. Sweeney’s research paper summarizing her findings is slated for publication in an academic journal.

First, Sweeney identified typically African-American names using a database that compiled first names given disproportionately to babies of one racial identity over another. She then paired the first names with last names by identifying real professionals with academic qualifications—medical doctors, for example—and verified their racial identities with Google Image search results. Finally, using a sample of typically Caucasian and typically African-American names, she ran analyses of the search results.

According to Sweeney, Google maintains that it cannot predict which advertisements—positive or negative—will be most popular, so advertisements are initially distributed at random. However, in searches of typically African-American names on Google and Reuters.com, Sweeney found that between 81 and 95 percent of generated ads suggested an arrest.

“It is an interesting mirror of society,” said Sweeney, “that the Internet, which started out neutral, has begun to show racial bias.”

When asked for comment, Google spokesperson Aaron J. Stein wrote in an email that AdWords, Google's profitable advertising product, does not engage in racial profiling.

“We also have a policy which states that we will not allow ads that advocate against an organization, person or group of people,” Stein wrote. “It is up to individual advertisers to decide which keywords they want to choose to trigger their ads.”

Reuters.com could not immediately be reached for comment on Sweeney’s study Wednesday evening.

Sweeney said there are two possible reasons for the seemingly biased results: Google’s computer-generated algorithm may be unintentionally skewed, or Google users may more frequently click on arrest advertisements that come up for black names than for white names.

Gary King, director of the Institute for Quantitative Social Science at Harvard, wrote in an email that isolating the cause of the skewed search results will determine the next steps for researchers.

“Laying out the patterns, as Latanya is doing, and then ascertaining its causes and effects, is very important,” King wrote.

—Staff writer Anneli L. Tostar can be reached at annelitostar@college.harvard.edu. Follow her on Twitter at @annelitostar.

Harvard Researcher: Google-Generated Ads Show Racial Bias


April 2, 2013

Volume 11, issue 3

Discrimination in Online Ad Delivery

Google ads, black names and white names, racial discrimination, and click advertising

Latanya Sweeney

Do online ads suggestive of arrest records appear more often with searches of black-sounding names than white-sounding names? What is a black-sounding name or white-sounding name, anyway? How many more times would an ad have to appear adversely affecting one racial group for it to be considered discrimination? Is online activity so ubiquitous that computer scientists have to think about societal consequences such as structural racism in technology design? If so, how is this technology to be built? Let's take a scientific dive into online ad delivery to find answers.

"Have you ever been arrested?" Imagine this question appearing whenever someone enters your name in a search engine. Perhaps you are in competition for an award, a scholarship, an appointment, a promotion, or a new job, or maybe you are in a position of trust, such as a professor, a physician, a banker, a judge, a manager, or a volunteer. Perhaps you are completing a rental application, selling goods, applying for a loan, joining a social club, making new friends, dating, or engaged in any one of hundreds of circumstances for which someone wants to learn more about you online. Appearing alongside your list of accomplishments is an advertisement implying you may have a criminal record, whether you actually have one or not. Worse, the ads may not appear for your competitors.

Job applications frequently include questions such as: Have you ever been arrested? Have you ever been charged with a crime? Other than a traffic ticket, have you been convicted of a crime? Employers ask these questions to establish trustworthiness. Because others often equate a criminal record with not being reliable or honest, protections exist for those having criminal records.

If an employer disqualifies a job applicant based solely upon information indicating an arrest record, the company may face legal consequences. The U.S. EEOC (Equal Employment Opportunity Commission) is the federal agency charged with enforcing Title VII of the Civil Rights Act of 1964, a law that applies to most employers, prohibiting employment discrimination based on race, color, religion, sex, or national origin. Guidance issued in 1973 extended protections to people with criminal records.5,11 Title VII does not prohibit employers from obtaining criminal background information. Certain uses of criminal information, however, such as a blanket policy or practice of excluding applicants or disqualifying employees based solely upon information indicating an arrest record, can result in a charge of discrimination.

To make a determination, the EEOC uses an adverse impact test that measures whether certain practices, intentional or not, have a disproportionate effect on a group of people whose defining characteristics are covered by Title VII. To decide, you calculate the percentage of people affected in each group, divide the smaller value by the larger to get the ratio, and compare the result to 80 percent. For example, suppose a company laid off comparable black and white workers at the same rate—25 percent of blacks and 25 percent of whites—then the ratio, 25 divided by 25, would be 100 percent. If the ratio is less than 80 percent, then the EEOC considers the effect disproportionate and may hold the employer responsible for discrimination.6
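To make the four-fifths rule concrete, here is a minimal sketch of the calculation just described. The function name and the second, skewed example are illustrative; only the 25-percent/25-percent case comes from the text above.

```python
def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """EEOC adverse-impact ratio: the smaller affected rate divided by
    the larger, expressed as a percentage."""
    smaller, larger = sorted((rate_a, rate_b))
    return 100.0 * smaller / larger

# Example from the text: 25% of black and 25% of white workers laid off.
print(adverse_impact_ratio(25, 25))  # 100.0 -> at or above 80%, not disproportionate

# A hypothetical skewed case: 10% of one group vs. 25% of the other.
print(adverse_impact_ratio(10, 25))  # 40.0 -> below 80%, flagged as disproportionate
```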

What about online ads suggesting someone with your name has an arrest record, even when no one with your name has been arrested? Title VII does not apply unless you have an arrest record and can prove the potential employer routinely uses ads or information from the company sponsoring the ads, and the result has an inappropriate chilling effect on hiring applicants with criminal records.

The advertiser may argue the ads are commercial free speech—a constitutional right to display the ad associated with your name. The First Amendment of the U.S. Constitution protects advertising. In a landmark decision, the U.S. Supreme Court set out a test for assessing government restrictions on commercial speech, which begins by determining whether the speech is misleading.3 Are online ads suggesting the existence of an arrest record misleading if no one by that name has an arrest record?

Assume the ads are free speech: what happens when these ads appear more often for one racial group than another? Not everyone is being equally affected by the free speech. Is that free speech or racial discrimination?

Racism, as defined by the U.S. Commission on Civil Rights, is "any attitude, action, or institutional structure which subordinates a person or group because of their color . . . Racism is not just a matter of attitudes; actions and institutional structures can also be a form of racism."16

Discrimination in Online Ad Delivery

Latanya Sweeney, Racial Discrimination in Online Ad Delivery

Comment by: Margaret Hu

PLSC 2013

Published version available here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2208240

Workshop draft abstract:

Investigating the appearance of online advertisements that imply the existence of an arrest record, this writing chronicles field experiments that measure racial discrimination in ads served by Google AdSense. A specific company, instantcheckmate.com, sells aggregated public information about individuals in the United States and sponsors ads to appear with Google search results for searches of some exact “firstname lastname” queries. A Google search for a person’s name, such as “Trevon Jones”, may yield a personalized ad that may be neutral, such as “Looking for Trevon Jones? Comprehensive Background Report and More…”, or may be suggestive of an arrest record (Suggestive ad), such as “Trevon Jones, Arrested?…” or “Trevon Jones: Truth. Arrests and much more. … “

Field experiments documented in this writing show racial discrimination in ad delivery based on searches of 2200 personal names across two websites. First names, documented by others as being assigned primarily to black babies, such as Tyrone, Darnell, Ebony and Latisha, generated ads suggestive of an arrest 75 to 96 percent of the time, while names having a first name documented by others as being assigned at birth primarily to whites, such as Geoffrey, Brett, Kristen and Anne, generated more neutral copy: the word “arrest” appeared zero to 9 percent of the time. A few names did not follow these patterns: Brad, a name predominantly given to white babies, generated a Suggestive ad 62 to 65 percent of the time. All ads return results for actual individuals, and Suggestive ads appear regardless of whether the subjects have an arrest record in the company’s database. Notwithstanding these findings, the company maintains Google received the same ad copy for groups of last names (not first names), raising questions as to whether Google’s algorithm exposes racial bias in society.
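A claim of statistically significant discrimination of this kind typically rests on comparing suggestive-ad rates between the two name groups. The sketch below runs a standard two-proportion z-test; the black-name counts echo the 332-of-366 Google figure reported later in this record, while the white-name counts are purely hypothetical.

```python
import math

# Hypothetical tallies in the spirit of the percentages reported here:
# 332 of 366 ads suggestive for black-identifying names (a figure cited
# elsewhere in this record); the white-name counts are illustrative only.
suggestive_black, total_black = 332, 366
suggestive_white, total_white = 30, 366

p1 = suggestive_black / total_black
p2 = suggestive_white / total_white
pooled = (suggestive_black + suggestive_white) / (total_black + total_white)

# Standard two-proportion z-test.
se = math.sqrt(pooled * (1 - pooled) * (1 / total_black + 1 / total_white))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"rates: {p1:.0%} vs {p2:.0%}, z = {z:.1f}, p = {p_value:.2g}")
```

A p-value far below 0.05 would indicate the gap in suggestive-ad rates between the two name groups is very unlikely to arise by chance.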

Latanya Sweeney, Racial Discrimination in Online Ad Delivery


Season 4, Episode 2

When Harvard professor Latanya Sweeney Googled her name one day, she noticed something strange: an ad for a background check website came up in the results, with the heading: “Latanya Sweeney, Arrested?” But she had never been arrested, and neither had the only other Latanya Sweeney in the U.S. So why did the ad suggest she had been? Thousands of Google searches later, Sweeney discovered that Googling traditionally black names is more likely to produce an ad suggestive of a criminal background. Why? In this episode of Freakonomics Radio, Stephen Dubner investigates the latest research on names. Steve Levitt talks about his groundbreaking research on names, economic status, and race. And University of Chicago economist Eric Oliver explains why a baby named “Cody” is more likely to belong to conservative parents, and why another named “Esme” was probably born to a pair of liberals.

How Much Does Your Name Matter?

That Google and other companies track our movements around the Web to target us with ads is well known. How exactly that information gets used is not—but a research paper presented last week suggests that some of the algorithmic judgments that emerge from Google’s ad system could strike many people as unsavory.

Researchers from Carnegie Mellon University and the International Computer Science Institute built a tool called AdFisher to probe the targeting of ads served up by Google on third-party websites. They found that fake Web users believed by Google to be male job seekers were much more likely than equivalent female job seekers to be shown a pair of ads for high-paying executive jobs when they later visited a news website.

AdFisher also showed that a Google transparency tool called “ads settings,” which lets you view and edit the “interests” the company has inferred for you, does not always reflect potentially sensitive information being used to target you. Browsing sites aimed at people with substance abuse problems, for example, triggered a rash of ads for rehab programs, but there was no change to Google’s transparency page.

What exactly caused those specific patterns is unclear, because Google’s ad-serving system is very complex. Google uses its data to target ads, but ad buyers can make some decisions about demographics of interest and can also use their own data sources on people’s online activity to do additional targeting for certain kinds of ads. Nor do the examples breach any specific privacy rules—although Google policy forbids targeting on the basis of “health conditions.” Still, says Anupam Datta, an associate professor at Carnegie Mellon University who helped develop AdFisher, they show the need for tools that uncover how online ad companies differentiate between people.

“I think our findings suggest that there are parts of the ad ecosystem where kinds of discrimination are beginning to emerge and there is a lack of transparency,” says Datta. “This is concerning from a societal standpoint.” Ad systems like Google’s influence the information people are exposed to and potentially even the decisions they make, so understanding how those systems use data about us is important, he says.

Even companies that run online ad networks don’t have a good idea of what inferences their systems draw about people and how those inferences are used, says Datta. His group has begun collaborating with Microsoft to develop a version of AdFisher for use inside the company, to look for potentially worrying patterns in the ad targeting on the Bing search engine. A paper by Datta and two colleagues—Michael Tschantz, of the International Computer Science Institute, and Amit Datta, also at Carnegie Mellon—was presented at the Privacy Enhancing Technologies Symposium in Philadelphia last Thursday.

Google did not officially respond when the researchers contacted the company about their findings late last year, they say. However, this June the team noticed that Google had added a disclaimer to its ad settings page. The interest categories shown are now said to control only “some of the Google ads that you see,” and not those where third parties have made use of their own data. Datta says that greatly limits the usefulness of Google’s transparency tool, which could probably be made to reveal such information if the company chose. “They are serving these ads, and if they wanted to they could reflect these interests,” he says.

“Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed,” said Andrea Faville, a Google spokeswoman, in an e-mail. “We provide transparency to users with ‘Why This Ad’ notices and Ads Settings, as well as the ability to opt out of interest-based ads.” Google is looking at the methodology of the study to try to understand its findings.

The AdFisher tool works by sending out hundreds or thousands of automated Web browsers on carefully chosen trails across the Web in such a way that an ad-targeting network will infer certain interests or activities. The software then records which ads are shown when each automated browser visits a news website that uses Google’s ad network, as well as any changes to the ad settings page. In some experiments that page is edited to look for differences between the ways ads are targeted to, say, males and females. AdFisher automatically flags any statistically significant differences in how ads are targeted using the particular interest categories or demographics it is investigating.
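One simple way to flag such differences automatically is a permutation test: shuffle the group labels many times and ask how often chance alone produces a gap as large as the one observed. The sketch below applies this idea to per-browser impression counts for a single ad; the counts are invented, and the real AdFisher pipeline trains a machine-learning classifier over full ad vectors rather than comparing a simple difference of means.

```python
import random

# Invented per-browser impression counts for one ad of interest.
male_counts   = [7, 9, 6, 8, 10, 7, 9, 8]
female_counts = [2, 1, 3, 2, 1, 2, 3, 1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(male_counts) - mean(female_counts)

# Permutation test: repeatedly shuffle group labels and re-measure the gap.
pooled = male_counts + female_counts
n = len(male_counts)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:n]) - mean(pooled[n:])) >= abs(observed):
        extreme += 1

print(f"observed gap = {observed:.2f}, permutation p = {extreme / trials:.4f}")
```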

Roxana Geambasu, an assistant professor at Columbia University, says there’s considerable value in the way AdFisher can statistically extract patterns from the complexity of targeted ads. A tool called XRay, which her own research group released last year, can reverse-engineer the connection between ads shown to Gmail users and keywords in their messages. For example, ads for low-requirement car loans might be targeted to those using words associated with financial difficulties.

However, Geambasu says that the results from both XRay and AdFisher are still only suggestive. “You can’t draw big conclusions, because we haven’t studied this very much and these examples could be rare exceptions,” she says. “What we need now is infrastructure and tools to study these systems at much larger scale.” Being able to watch how algorithms target and track people to do things like serve ads or tweak the price of insurance and other products is likely to be vital if civil rights groups and regulators are to keep pace with developments in how companies use data, she says.

A White House report on the impact of “big data” last year came to similar conclusions. “Data analytics have the potential to eclipse longstanding civil rights protections in how personal information is used in housing, credit, employment, health, education, and the marketplace,” it said.

Probing the Dark Side of Google’s Ad-Targeting System

Female job seekers are much less likely to be shown adverts on Google for highly paid jobs than men, researchers have found.

The team of researchers from Carnegie Mellon built an automated testing rig called AdFisher that pretended to be a series of male and female job seekers. Their 17,370 fake profiles only visited jobseeker sites and were shown 600,000 adverts, which the team tracked and analysed.

The authors of the study wrote: “In particular, we found that males were shown ads encouraging the seeking of coaching services for high paying jobs more than females.”

One experiment showed that Google displayed adverts for a career coaching service for “$200k+” executive jobs 1,852 times to the male group and only 318 times to the female group. Another experiment, in July 2014, showed a similar trend but was not statistically significant.

Google’s ad targeting system is complex, taking into account various factors of personal information, browsing history and internet activity. Critically, the fake users started with completely fresh profiles and behaved in the same way; gender was the only factor that differed, indicating that the ad targeting for these job adverts was discriminatory.

Discrimination is inherent to advertising

However, the authors of the study admit that the gender discrimination shown is difficult to pin to one factor, due to the complexity of not only Google’s profiling systems, but also of the way advertisers buy and target their adverts using Google.


A Google spokeswoman said: “Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed.”

Profiling is inherently discriminatory, as it attempts to treat people differently based on their behaviour and personal information. While that customisation can be useful, showing more relevant ads to users, it can also have negative connotations.

The study authors said: “Male candidates getting more encouragement to seek coaching services for high-paying jobs could further the current gender pay gap. Even if this decision was made solely for economic reasons, it would continue to be discrimination.”

Google allows users to opt out of behavioural advertising and provides a system to see why users were shown ads and to customise their ad settings. But the study suggests that there is a transparency and overt discrimination issue in the wider advertising landscape.

Television, radio and print advertisers have, of course, been practising discrimination for years, pushing ads out with shows or magazines that appeal to a particular gender or demographic.

The difference now is that it is much more obvious in the internet age, and the in-depth profiling that is now possible could make it worse, not better.

Profiling, ad choice, dating and substance abuse

The researchers also investigated whether visiting sites dealing with certain topics, specifically substance abuse, adult content, disabilities, mental disorders and infertility, affected the ads served to the fake profiles.

Only visiting sites dealing with substance abuse and disability created statistically significant results. The researchers found that after visiting substance abuse sites Google’s advert profile page showed no change to the interests listed, but the adverts shown to the user accounts did change, including displaying ads for rehabilitation services from a company called Watershed. The adverts shown to the control group did not include any rehabilitation services.

“One possible reason why Google served Watershed’s ads could be remarketing, a marketing strategy that encourages users to return to previously visited websites,” said the authors of the study.

The Watershed site was included in the top 100 substance abuse sites list, which was used as the experimental list of sites to visit by the automated system.

A similar result was shown in testing for disability sites, using a similar methodology. This time the researchers found that Google’s ad interest profile did change for the test group, but that it showed other interests not related to disability.

Ads for mobility devices including a standing wheelchair were shown to the test group 1,076 times but never to the control group. Again the adverts included sites within the top 100 sites concerning disability used during the experiment.

Google has said that it prohibits the targeting of adverts within its “sensitive category policy”, which includes health issues such as substance abuse. It also says that it does not allow remarketing within the same sensitive areas.

The researchers also discovered that Google’s ad settings, which allow users to manually remove certain interests from their tracking profiles, had the desired effect.

“The ad settings appear to actually give users the ability to avoid ads they might dislike or find embarrassing,” said the authors.

Removing online dating interests, for instance, stopped online dating ads from appearing within the top five ads served to the test group.

Women less likely to be shown ads for high-paid jobs on Google, study shows

When Timnit Gebru attended a prestigious AI research conference last year, she counted 6 black people in the audience out of an estimated 8,500. And only one black woman: herself.

As a PhD candidate at Stanford University who has published a number of notable papers in the field of artificial intelligence, Gebru finds the lack of diversity in the industry to be “extremely alarming” and effectively an “international emergency.” “People openly acknowledge that diversity is a priority,” she explains, “but they don’t treat the issue as urgent and actively address it.”

Gebru is hardly a stranger to adversity. Originally from Ethiopia, she arrived in the United States at the age of 16 and was immediately confronted with racial prejudices. Teachers expected her to fail exams because she was a foreigner. A guidance counselor nearly convinced her she couldn’t win acceptance to any universities, even her safety school. Through perseverance and resilience, Gebru debunked these inaccurate predictions and thrived in her new country, landing employment as an engineer at Apple and earning advanced technical degrees from Stanford.

AI researchers pride themselves on being rational and data-driven, but can be blind to issues such as racial or gender bias that aren’t always easy to capture with numbers. Homogenous thinking in the AI industry has implications far beyond the racial makeup of PhD programs and AI conference attendees. Gebru points out that AI powers high-stakes systems used to identify terrorists or predict criminal recidivism. Biases and oversights even bleed into the everyday technology we rely on.

These ongoing challenges are no surprise to Latanya Sweeney, the first black woman to receive a PhD in computer science from MIT. Currently a professor at Harvard and director of their Data Privacy Lab, Sweeney’s research examines technological solutions to societal, political and governance challenges. One of her important contributions illuminates discrimination in online advertising, where she discovered internet searches of names “racially associated” with the black community are 25% more likely to yield sponsored ads suggesting that the person has a criminal record, regardless of the truth. When Sweeney googles her own name, she encounters ads such as: “Latanya Sweeney, Arrested? 1) Enter name and state 2) Access full background. Checks instantly. www.instantcheckmate.com.”

Recently, Sweeney, who is also Editor-In-Chief of Technology Science, reported that SAT test prep services charge zip codes with high proportions of Asian residents nearly double the average price, regardless of their actual income. “In the United States, price discrimination is illegal if based on race, religion, nationality, or gender,” her report states, but the enforcement of the law is challenging in online commerce where differential pricing is wrapped up in opaque algorithms.

Biases of creators trickle down to their creations. Due to the exponential impact of technology, prioritizing diversity in AI is “even more important than in other fields,” cautions Gebru. To encourage networking and visibility, Gebru co-founded the social community Black In AI. The organization is on track to dramatically increase the participation of black researchers at notable AI conferences. She also returned to Ethiopia to co-teach a programming course called AddisCoder to a diverse range of children. Half of the students were female and all were from public schools. Some of them didn’t even know how to type when they started the class.

Yet, the transformation was extraordinary. One of the students came from a family with financial hurdles that forced him to leave school, but successfully won admittance to Harvard, MIT, and Columbia after completing the AddisCoder program.

Despite inclusion programs and advocacy groups, many challenges to diversity remain. The first is the apolitical nature of the AI industry, which often prefers the ivory towers of academia. “Einstein was an activist and anti-segregationist,” Gebru remembers. “He taught at black schools and likened the racial discrimination in the US to what was happening in Nazi Germany. But most AI researchers today look down on politicians and don’t want to get involved.” As AI is increasingly used to affect outcomes of elections and identify terrorists and criminals, she cautions that “AI researchers should not be silent regarding the repercussions of their work.”

The current anti-immigration sentiment does not help either. Rana el Kaliouby, an Egyptian-Muslim entrepreneur, completed a PhD at Cambridge University and post-doc work at MIT. She commercialized her research in emotional artificial intelligence into the company Affectiva, which has raised over $30 million in funding. “I woke up to the news about [Trump’s] immigration [order] and had this empty feeling in my stomach,” she shares in a heartfelt story for Inc, adding that “this melting pot of experiences, interests, educations, backgrounds and culture…”

Fighting Algorithmic Bias & Homogenous Thinking in AI

It seems like everyone is talking about the power of big data and how it is helping companies, governments, and organizations make better and more efficient decisions. But rarely do they mention that big data can actually perpetuate and exacerbate existing systems of racism, discrimination, and inequality.

Big data is supposed to make life better. Companies like Netflix use it to recommend movies you might like to watch based on what you’ve previously streamed. There are also broader public applications, such as predicting (and thus more quickly responding to) outbreaks of disease based on online search patterns of symptoms.

The problem with big data is that its application and use is not impartial or unbiased. Harvard professor Latanya Sweeney, who also directs the university’s Data Privacy Lab, conducted a cross-country study of 120,000 Internet search ads and found repeated incidence of racial bias. Specifically, her study looked at Google adword buys made by companies that provide criminal background checks. At the time, the results of the study showed that when a search was performed on a name that was “racially associated” with the black community, the results were much more likely to be accompanied by an ad suggesting that the person had a criminal record—regardless of whether or not they did (see video below). This is just one of many research studies showing similar bias.

If an employer searched the name of a prospective hire, only to be confronted with ads suggesting that the person had a prior arrest, you can imagine how that could affect the applicant’s career prospects.

Can computers be racist? Big data, inequality, and discrimination

Google’s search algorithms expose racial discrimination, a new study by a Harvard professor purports. It claims ads related to criminal records are more likely to pop up when "black-sounding names" are googled.

Latanya Sweeney, Professor of Government and Technology in Residence at Harvard, found that Google searches involving "black-sounding names" are 25 percent more likely to produce ads implying that the person has been arrested than searches involving “white-sounding names”.

What are “black- and white-sounding names”?

In her paper “Discrimination in Online Ad Delivery” (published January 28), Sweeney refers to a job discrimination study that “used a correlation of names given to black and white babies in Massachusetts between 1974 and 1979.”

First, using those findings, she collected a list of more than 2,000 names that were suggestive of race.

Names such as Lakisha, DeAndre, Jermaine, Leroy and Darnell more often tend to suggest that the person was black, while names like Allison, Kristen, Greg or Jack were considered to be white-identifying names.

Sample face images retrieved on google.com for searches of “latisha”. (Image from arxiv.org)

Sweeney took a look at so-called public-record ads from services such as InstantCheckMate, PeopleSmart and some others. She compared search results on Google.com and Reuters.com.

It turned out that “black-sounding names” are more likely than “white-sounding” ones to trigger ads including the word "arrest".

Searching for names like Leroy, Jamal or Kenya yielded a greater percentage of ads with the word “arrested” in the ad’s text, while for Jack or Greg, for instance, neutral ads, with no criminal related text, popped up.

In Google searches, InstantCheckMate ads contained the word “arrested” in 92 per cent of returns (332 cases out of 366) when “black-sounding” names were looked up; only 8 per cent of the ads returned were neutral.

For comparison, in Reuters searches, InstantCheckMate ads suggested checking criminal records for 60 per cent of all black-identifying names.

Sweeney tried her own name. A computer scientist with no criminal past, she discovered that, before her academic merits were presented, she was first greeted by an ad suggesting she check: “Latanya Sweeney, arrested?”

Sweeney followed the link and paid a fee to discover there was no criminal record associated with that name.

"Perhaps you are in competition for an award, an appointment, a promotion, or a new job…” the scientist writes in her paper, giving a bunch of circumstances for which an online researcher seeks to learn more about a person.

“Appearing alongside your list of accomplishments is an advertisement implying you may have a criminal record whether you actually have one or not,” Sweeney concluded.

However, she hesitated to give a cause for the differences in ads, saying more information “about the inner workings of Google AdSense [Google's online ad tool]” is required.

Sweeney suggested that the search engines might be just a reflection of society’s prejudices and delivered ads are simply based on the most popular links previous users have clicked on.

"So the ad text getting the most clicks eventually displays more frequently," she explained.

Google AdSense has responded to Sweeney’s findings, saying that it does not conduct any racial profiling in its search software.

Google exposes racial discrimination in online ads delivery - study
