Incident 645: Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Description: Google's Gemini chatbot faced numerous reported bias issues upon release, producing a variety of problematic outputs such as racial inaccuracies and political biases, including in responses concerning Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt of its image generation feature and an apology from Google.
Editor Notes: This incident ID archives various reports following Gemini's release. The documented cases, while distinct, suggest that further refinement in training may have been beneficial. The primary impact appears to be on Google's reputation.

Alleged: Google developed an AI system deployed by Google and Gemini, which harmed the general public.

Incident Stats

Incident ID
645
Report Count
35
Incident Date
2024-02-21
Editors
Daniel Atherton
Google apologizes for “missing the mark” after Gemini generated racially diverse Nazis
theverge.com · 2024
Adi Robertson post-incident response

Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark. The statement follows criticism…

Google suspends Gemini AI chatbot's ability to generate pictures of people
apnews.com · 2024

Google said Thursday it is temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for “inaccuracies” in historical depictions that it was creating.

Gemini users this week pos…

Google pauses Gemini’s ability to generate AI images of people after diversity errors
theverge.com · 2024
Tom Warren post-incident response

Google says it’s pausing the ability for its Gemini AI to generate images of people, after the tool was found to be generating inaccurate historical images. Gemini has been creating diverse images of the US Founding Fathers and Nazi-era Ger…

Google’s hidden AI diversity prompts lead to outcry over historically inaccurate images
arstechnica.com · 2024

Generations from Gemini AI from the prompt, "Paint me a historically accurate depiction of a medieval British king."

On Thursday morning, Google announced it was pausing its Gemini AI image-synthesis feature in response to criticism that th…

Google’s ‘Woke’ Image Generator Shows the Limitations of AI
wired.com · 2024

Google has admitted that its Gemini AI model “missed the mark” after a flurry of criticism about what many perceived as “anti-white bias.” Numerous users reported that the system was producing images of people of diverse ethnicities and gen…

Google's AI was making really historically inaccurate pictures and now it's on ice
qz.com · 2024
Britney Nguyen post-incident response

Google said Thursday that it was pausing its AI model’s ability to generate images of people after users pointed out that its Gemini AI was making historically inaccurate images of people, such as racially diverse Nazi-era German soldiers.

Google pauses Gemini AI image generator after it created inaccurate historical pictures
cnbc.com · 2024
Arjun Kharpal post-incident response

Google on Thursday announced it is pausing its Gemini artificial intelligence image generation feature after saying it offers "inaccuracies" in historical pictures.

Users on social media had been complaining that the AI tool generates image…

Google Pauses Gemini's Image Generation of People to Fix Historical Inaccuracies
pcmag.com · 2024
Kate Irwin post-incident response

UPDATE 2/22: Early Thursday morning, Google said it had disabled Gemini's ability to generate any images of people. A quick PCMag test of Gemini on a Mac using the Chrome browser today delivered the following message when Gemini was asked t…

Google Suspends AI Tool's Image Generation of People After It Created Historical 'Inaccuracies,' Including Racially Diverse WWII-Era Nazi Soldiers
variety.com · 2024
Todd Spangler post-incident response

After critics slammed Google‘s “woke” generative AI system, the company halted the ability of its Gemini tool to create images of people to fix what it acknowledged were “inaccuracies in some historical image generation depictions.”

Google’…

Google making changes after Gemini AI portrayed people of color inaccurately
nbcnews.com · 2024
Kat Tenbarge post-incident response

Google said Thursday that it would temporarily limit the ability to create images of people with its artificial-intelligence tool Gemini after it produced illustrations with historical inaccuracies.

The pause was announced in a post on X af…

Google Stops Gemini AI From Making Images Of People—After Musk Calls Service ‘Woke’
forbes.com · 2024
Zachary Folk post-incident response

Google announced Thursday that it will “pause” its Gemini image generator’s ability to create images of people, after the program was criticized for showing misleading images of people's races in historical contexts—leading billionaire Elon…

Google pauses ‘absurdly woke’ Gemini AI chatbot’s image tool after backlash over historically inaccurate pictures
nypost.com · 2024
Thomas Barrabi post-incident response

Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned for creating “diverse” images that were not historically or factually accurate — such as black Vikings, female popes and Native Amer…

Gemini makes a mess of generating historic images and pushes Google into culture war, Google promises fix
indiatoday.in · 2024
Divya Bhati post-incident response

Update: Google has paused the image generation feature of Gemini AI after receiving multiple complaints regarding its historical inaccuracies. The company has issued a statement on the same saying, “We’re already working to address recent i…

Google pausing AI portrait tool after complaints of historical inaccuracies | CBC News
cbc.ca · 2024
Thomson Reuters post-incident response

Google is pausing an AI tool that creates images of people following inaccuracies in some historical depictions the model generated, the latest hiccup in the Alphabet-owned company's efforts to catch up with rivals OpenAI and Microsoft.

Goo…

Google pauses Gemini's AI image generation features to fix historical inaccuracies
readwrite.com · 2024
Rachael Davies post-incident response

Google has paused the ability to create pictures of people using Gemini’s AI image generation feature to fix some historical inaccuracies.

Some Gemini users have shared screenshots of apparent historical inaccuracies, with Gemini creating i…

Google pauses Gemini AI for racially inaccurate images of historical figures - UPI.com
upi.com · 2024
Doug Cunningham post-incident response

Feb. 22 (UPI) -- Google Thursday said it is pausing its Gemini AI image creation after it created inaccurate historical images.

The company said in a statement on X that they were "working to address recent issues with Gemini's image genera…

Google to pause Gemini AI model's image generation of people due to inaccuracies
reuters.com · 2024
Reuters post-incident response

Feb 22 (Reuters) - Google is pausing its AI tool that creates images of people following inaccuracies in some historical depictions generated by the model, the latest hiccup in the Alphabet-owned (GOOGL.O), opens new tab company's efforts t…

Google pulls the plug on Gemini AI image generation after being mocked for revisionist history
fastcompany.com · 2024
Associated Press post-incident response

Google said Thursday it's temporarily stopping its Gemini artificial intelligence chatbot from generating images of people a day after apologizing for "inaccuracies" in historical depictions that it was creating.

Gemini users this week post…

Gemini image generation got it wrong. We'll do better.
blog.google · 2024
Prabhakar Raghavan post-incident response

Three weeks ago, we launched a new image generation feature for the Gemini conversational app (formerly known as Bard), which included the ability to create images of people.

It’s clear that this feature missed the mark. Some of the images …

Google Pauses Gemini AI's Image Generator Over "Historical Inaccuracies"
ndtv.com · 2024
NDTV post-incident response

Google has temporarily stopped its Gemini Artificial Intelligence chatbot from generating images of people. This comes a day after the tech giant issued an apology for “inaccuracies” in historical depictions the chatbot was creating.

After …

Google Apologizes For Inaccurate Gemini Photos: Tried Avoiding ‘Traps’ Of AI Technology
forbes.com · 2024
Brian Bushard post-incident response

Google apologized Friday for a tranche of historically inaccurate images generated on its Gemini AI image service, saying the feature “missed the mark” after widely circulated images sparked backlash from right-wing users and billionaire X …

Google apologizes over Gemini image generation inaccuracies
siliconangle.com · 2024
Maria Deutscher post-incident response

Google LLC has apologized after its Gemini chatbot generated images that incorrectly depicted racially diverse characters in historical contexts. 

Prabhakar Raghavan, the company’s senior vice president of knowledge and information, acknowl…

Google Under Fire As Gemini Generates Inaccurate Images Of Historical Figures
nasdaq.com · 2024
RTTNews post-incident response

(RTTNews) - Alphabet Inc.'s (GOOG) Google issued an apology for its AI chatbot Gemini, generating images of historical figures with inaccurate racial and ethnic depictions, attributing the errors to its attempt to create diverse results.

Th…

Google admits its Gemini AI 'got it wrong' following widely panned image generator: Not 'what we intended'
foxbusiness.com · 2024
Joseph Wulfsohn post-incident response

Google is offering a mea culpa for its widely-panned artificial intelligence (AI) image generator that critics trashed for creating "woke" content, admitting it "missed the mark."

"Three weeks ago, we launched a new image generation feature…

Google Gemini Anti-Whiteness Disaster Is a Cautionary Tale About... Gaming?
thealgorithmicbridge.com · 2024

Google's Gemini disaster can be summarized with this sentence:

A non-problem with a terrible solution; a terrible problem with no solution.

By now I'm sure you all have seen the lynching Google has received online after Gemini users realize…

Now Google's 'absurdly woke' Gemini AI refuses to condemn pedophilia as wrong - after being blasted over 'diverse' but historically inaccurate images
dailymail.co.uk · 2024

Google Gemini, the company's 'absurdly woke' AI chatbot, is facing fresh controversy for refusing to condemn pedophilia.

It comes just a day after its image generator was blasted for replacing white historical figures with people of color.

Red-faced Google apologizes after woke AI bot gives 'appalling' answers about pedophilia, Stalin
foxnews.com · 2024
Gabriel Hays post-incident response

Google on Saturday admitted to Fox News Digital that a failure by its AI chatbot to outright condemn pedophilia is both "appalling and inappropriate" and a spokesperson vowed changes. 

This came in the wake of users noting that Google Gemin…

Google AI’s answer on whether Modi is ‘fascist’ sparks outrage in India, calls for tough laws
scmp.com · 2024

Tech giant Google is facing a barrage of criticism by followers of Indian Prime Minister Narendra Modi over responses by its Gemini artificial intelligence (AI) tool that have been perceived as “biased”, prompting calls for tough regulation…

Google's Gemini AI generator will relaunch in a 'few weeks' after spitting out inaccurate images
qz.com · 2024
William Gavin post-incident response

Google is poised to relaunch its artificial intelligence-powered image generator, Gemini, a few weeks after it suspended the service for historically inaccurate photos the company called “embarrassing and wrong.”

Users testing out the servi…

Google Pauses Gemini AI Over ‘Historical Inaccuracies’
hoit.uk · 2024
Mike Knight post-incident response

**'Woke'... Overcorrecting For Diversity?**

An example of the inaccuracy issue (as highlighted by X user Patrick Ganley recently, after asking Google Gemini to generate images of the Founding Fathers of the US), was when it returned image…

Google Mired in Controversy Over AI Chatbot Push
wsj.com · 2024
Wall Street Journal post-incident response

Google’s artificial-intelligence push is turning into a reputational headache.

Gemini, a chatbot based on the company’s most advanced AI technology, angered users last week by producing ahistoric images and blocking requests for depictions …

Google CEO Sundar Pichai says Gemini chatbot’s ‘woke’ AI disaster ‘completely unacceptable’
nypost.com · 2024
Thomas Barrabi post-incident response

Google CEO Sundar Pichai blasted the Gemini AI chatbot’s widely panned habit of generating “woke” versions of historical figures as “completely unacceptable” in a scathing email to company employees.

Pichai said Google’s AI teams are “worki…

The AI Culture Wars Are Just Getting Started
wired.com · 2024

Google was forced to turn off the image-generation capabilities of its latest AI model, Gemini, last week after complaints that it defaulted to depicting women and people of color when asked to create images of historical figures that were …

After Google Gemini, Facebook parent Meta's AI tool creates 'historically inaccurate images'
timesofindia.indiatimes.com · 2024
TOI Tech Desk post-incident response

Google Gemini and Meta's Imagine AI faced backlash for generating historically inaccurate images. The models showed people of Southeast Asia as American colonials and people of color as Founding Fathers. The issue highlighted bias and over …

‘We definitely messed up’: why did Google AI tool make offensive historical images?
theguardian.com · 2024
Dan Milmo, Alex Hern post-incident response

Google’s co-founder Sergey Brin has kept a low profile since quietly returning to work at the company. But the troubled launch of Google’s artificial intelligence model Gemini resulted in a rare public utterance recently: “We definitely mes…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.