AI Incident Database

Report 3773

Associated Incidents

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation

Google Pauses Gemini AI's Image Generator Over "Historical Inaccuracies"
ndtv.com · 2024

Google has temporarily stopped its Gemini Artificial Intelligence chatbot from generating images of people. This comes a day after the tech giant issued an apology for “inaccuracies” in historical depictions the chatbot was creating.

After Google released its revamped Gemini in some parts of the world on February 8, some users posted screenshots on social media earlier this week showing the chatbot inaccurately depicting historically white-dominated scenes with racially diverse characters.

For example, the Gemini AI chatbot depicted Nazi-era troops as people from diverse ethnic backgrounds.

The results generated by the tool led to criticism and questions about whether the company was over-correcting for the risk of racial bias in its AI model.

What did Google say about the controversy?

Soon after the controversy, Google released a statement on X (formerly Twitter) via the Google Communications account (@Google_Comms) on February 22, 2024: “We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon.”

Earlier, on Thursday, Google said the team was aware that Gemini was offering inaccuracies in some historical image generation depictions.

It added, “We are working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it is missing the mark.”

In a post on February 21, 2024, Google Communications (@Google_Comms) wrote: “We're aware that Gemini is offering inaccuracies in some historical image generation depictions. Here's our statement.”

What is Google Gemini AI?

Formerly known as the Bard chatbot, Gemini is the latest addition to an ever-growing field of artificial intelligence.

It is a family of multimodal large language models designed for language, audio, code, and video understanding.

Powered by the Imagen 2 model, the Gemini AI chatbot allows users to generate high-quality images with text prompts.

Google officially released Gemini on December 6 last year. Because it integrates natural language processing and image recognition, Gemini can perform tasks such as image captioning and complex visual parsing without requiring external OCR tools.

Read the source

2024 - AI Incident Database
