AI Incident Database

Report 3723

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google CEO Sundar Pichai says Gemini chatbot’s ‘woke’ AI disaster ‘completely unacceptable’
nypost.com · 2024

Google CEO Sundar Pichai blasted the Gemini AI chatbot’s widely panned habit of generating “woke” versions of historical figures as “completely unacceptable” in a scathing email to company employees.

Pichai said Google’s AI teams are “working around the clock” to fix Gemini – which was disabled last week after the “absurdly woke” chatbot ignited a social media firestorm with bizarrely revisionist pictures such as Black Vikings, female popes, and even “diverse” Nazi-era German soldiers.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said in the Tuesday memo first obtained by Semafor.

The missive marked Pichai’s first public remarks on the PR nightmare that erased more than $70 billion in market value in a single day earlier this week – and raised renewed concerns among critics who say Google’s employees are injecting their political bias into the technology.

Pichai added that Google was “already seeing a substantial improvement on a wide range of prompts.”

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai said. “And we’ll review what happened and make sure we fix it at scale.”

Gemini’s “woke” image disaster further damaged public trust in Google’s AI tools at a pivotal point for the company, which had just rebranded its chatbot and unveiled the image generation tool a few days earlier.

[Image caption: Google Gemini generated controversy by producing ahistorical images in response to user prompts. Credit: Google Gemini]

Concerns about potential bias in Gemini’s model training were exacerbated after social media users resurfaced a set of politically charged tweets by Google Gemini’s product lead Jack Krawczyk. In the old posts, Krawczyk allegedly stated that “white privilege is f—king real” and America is rife with “egregious racism.”

Scrutiny wasn’t limited to Gemini’s image outputs. The chatbot generated further outrage with its text responses after it refused to condemn pedophilia and claimed there was “no right or wrong answer” when asked if Adolf Hitler or Elon Musk is worse.

The Google boss said the company would take “a clear set of actions” in response to the crisis, including unspecified “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations.”

The problem is expected to take “a few weeks” to fix, one of Google’s top AI executives said at an event earlier this week.
