AI Incident Database

Report 3775

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Apologizes For Inaccurate Gemini Photos: Tried Avoiding ‘Traps’ Of AI Technology
forbes.com · 2024

Google apologized Friday for a tranche of historically inaccurate images generated on its Gemini AI image service, saying the feature “missed the mark” after widely circulated images sparked backlash from right-wing users and billionaire X owner Elon Musk.

Google acknowledged some images generated on Gemini were “inaccurate or even offensive” in a blog post Friday afternoon, saying the company had been trying to stave off “some of the traps” it had “seen in the past with image generation technology,” including examples of sexually explicit images, violent imagery or depictions of real people.

The tech giant temporarily paused the feature Thursday, amid an online fury over some Gemini-created images, with Musk arguing the images put on full display the company’s “insane racist, anti-civilizational programming,” and the New York Post reporting some images of Black vikings, a female pope and female NHL players were created in response to prompts that did not specify race or gender.

Google, which had previously admitted the errors stem from its efforts to combat bias and boost representation, said in its apology post that the company had hoped the feature would “work well for everyone,” noting its “users come from all over the world.”

Google stated its “tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” and that its AI image generator had become “way more cautious” than it had intended, with the generator “wrongly interpreting some very anodyne prompts as sensitive.”

“If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people,” Google stated, adding: “You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Key Background

Google unveiled Gemini, a feature formerly known as Bard, in December, calling the product its “most capable and general model yet,” featuring “state-of-the-art performance.” Google launched the image generator feature last month, putting it up against rivals including OpenAI’s DALL-E and Midjourney. Days into the release, however, some images posted online appeared to feature historically inaccurate depictions of prompts, including a Black woman among U.S. senators in the 1800s and a Black man in a German World War II-era military uniform, The Verge reported. Google said Thursday it would pause the feature, releasing a statement vowing to “improve these kinds of depictions immediately.”

Google found itself wrapped in another controversy Thursday afternoon when screenshots of a purported message to Gmail users suggested the widely used email service would be sunsetting—a shocking revelation that briefly sent the internet into a tailspin, even though it turned out to be a hoax. Google appeared to deny the development in a Thursday afternoon post, saying on the Gmail X account that “Gmail is here to stay.”

Read the source

2024 - AI Incident Database