AI Incident Database

Report 3775

Associated Incidents

Incident 64535 Report
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Apologizes For Inaccurate Gemini Photos: Tried Avoiding ‘Traps’ Of AI Technology
forbes.com · 2024

Google apologized Friday for a tranche of historically inaccurate images generated on its Gemini AI image service, saying the feature “missed the mark” after widely circulated images sparked backlash from right-wing users and billionaire X owner Elon Musk.

Google acknowledged some images generated on Gemini were “inaccurate or even offensive” in a blog post Friday afternoon, saying the company had been trying to stave off “some of the traps” it had “seen in the past with image generation technology,” including examples of sexually explicit images, violent imagery or depictions of real people.

The tech giant temporarily paused the feature Thursday, amid an online fury over some Gemini-created images, with Musk arguing the images put on full display the company’s “insane racist, anti-civilizational programming,” and the New York Post reporting some images of Black vikings, a female pope and female NHL players were created in response to prompts that did not specify race or gender.

Google, which had previously admitted the errors stem from its efforts to combat bias and boost representation, said in its apology post that the company had hoped the feature would “work well for everyone,” noting its “users come from all over the world.”

Google stated its “tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” and that its AI image generator had become “way more cautious” than it had intended, with the generator “wrongly interpreting some very anodyne prompts as sensitive.”

“If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people,” Google stated, adding: “You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Key Background

Google unveiled Gemini, a feature formerly known as Bard, in December, calling the product its “most capable and general model yet,” featuring “state-of-the-art performance.” Google launched the image generator feature last month, putting it up against rivals including OpenAI’s DALL-E and Midjourney. Days into the release, however, some images posted online appeared to feature historically inaccurate depictions of prompts, including a Black woman among U.S. senators in the 1800s and a Black man in a German World War II-era military uniform, The Verge reported. Google said Thursday it would pause the feature, releasing a statement vowing to “improve these kinds of depictions immediately.”

Google found itself wrapped in another controversy Thursday afternoon when screenshots of a purported message to Gmail users suggested the widely used email service would be sunsetting—a shocking revelation that briefly sent the internet into a tailspin, even though it turned out to be a hoax. Google appeared to deny the development in a Thursday afternoon post, saying on the Gmail X account that “Gmail is here to stay.”


2024 - AI Incident Database