AI Incident Database

Issue 3813

Google’s AI Chatbot Gemini Restricts Election-Related Queries
theguardian.com · 2024

Google is restricting its Gemini AI chatbot from answering election-related questions in countries where voting is taking place this year, limiting users from receiving information about candidates, political parties and other elements of politics.

"Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses," Google's India team stated on the company's site.

The company initially announced its plans for limiting election-related queries in a blog post last December, according to a Google spokesperson, and made a similar announcement regarding European parliamentary elections in February. Google's post on Tuesday pertained to India's upcoming election, while TechCrunch reported that Google confirmed it is rolling out the changes globally.

When asked questions such as "tell me about President Biden" or "who is Donald Trump," Gemini now replies: "I'm still learning how to answer this question. In the meantime, try Google search," or a similarly evasive answer. Even the less subjective question "how to register to vote" receives a referral to Google search.

Google is limiting its chatbot's capabilities ahead of a raft of high-stakes votes this year in countries including the US, India, South Africa and the UK. There is widespread concern over AI-generated disinformation and its influence on global elections, as the technology enables the use of robocalls, deepfakes and chatbot-generated propaganda.

"As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we're restricting the types of election-related queries for which Gemini will return responses."

Governments and regulators around the world have struggled to keep up with the advancements in AI and their threat to the democratic process, while big tech companies are under pressure to rein in the malicious use of their AI tools. Google's blogpost on Tuesday states that it is implementing multiple features, such as digital watermarking and content labels for AI-generated content, to prevent the spread of misinformation at scale.

Google's decision to restrict Gemini should be a reason to scrutinize the overall accuracy of the company's AI tools, argues Daniel Susser, an associate professor of information science at Cornell University.

"If Google's generative AI tools are too unreliable for conveying information about democratic elections, why should we trust them in other contexts, such as health or financial information?" Susser said in a statement. "What does that say about Google's long-term plans to incorporate generative AI across its services, including search?"

Gemini recently faced a heated backlash over its image-generation capabilities, as users began to notice the tool inaccurately generated images of people of color when given prompts for historical situations. These included depictions of people of color as Catholic popes and as German Nazi soldiers in the second world war. Google suspended some of Gemini's capabilities in response to the controversy, issuing statements apologizing and saying that it would tweak its technology to fix the issue.

The Gemini scandal involved issues around AI-generated misinformation, but it also showed how major AI firms are finding themselves at the center of culture wars and under intense public scrutiny. Republican lawmakers accused Google of promoting leftist ideology through its AI tool, with the Missouri senator Josh Hawley calling on its CEO, Sundar Pichai, to testify under oath to Congress about Gemini.

Prominent AI companies, including OpenAI and Google, increasingly appear willing to block their chatbots from engaging with sensitive questions that could result in a public relations backlash. Even the decision of which questions these companies block is fraught, however, and a 404 Media report from earlier this month found that Gemini would not answer questions such as "what is Palestine" but would engage with similar queries about Israel.
