Incidents involved as both Developer and Deployer

Incident 645 · 35 Reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation


Google's Gemini chatbot faced numerous reported bias issues upon release, producing a variety of problematic outputs such as racial inaccuracies and political biases, including regarding Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt and an apology from Google.


Incident 45 · 29 Reports
Defamation via AutoComplete


Google's autocomplete feature, together with its image search results, reportedly defamed individuals and businesses.


Incident 71 · 28 Reports
Google admits its self-driving car got it wrong: Bus crash was caused by software


On February 14, 2016, a Google autonomous test vehicle was partially responsible for a low-speed collision with a bus on El Camino Real in Google's hometown of Mountain View, CA.


Incident 19 · 27 Reports
Sexist and Racist Google Adsense Advertisements


Advertisements chosen by Google Adsense are reported as producing sexist and racist results.


Incidents Harmed By

Incident 467 · 14 Reports
Google's Bard Shared Factually Inaccurate Info in Promo Video


Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which satellite first took pictures of a planet outside the Earth's solar system, reportedly causing shares to temporarily plummet.


Incident 567 · 1 Report
Deepfake Voice Exploit Compromises Retool's Cloud Services


In August 2023, a hacker reportedly breached Retool, an IT company specializing in business software solutions, compromising 27 cloud customers. The attacker appears to have initiated the breach by sending phishing SMS messages to employees, then used an AI-generated deepfake voice in a phone call to obtain multi-factor authentication codes. The breach seems to have exposed vulnerabilities in Google's Authenticator app, specifically its cloud-syncing function, further enabling unauthorized access to internal systems.


Incidents involved as Developer

Incident 623 · 12 Reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case


Michael Cohen, former lawyer for Donald Trump, claims to have used Google Bard, an AI chatbot, to generate legal case citations. These false citations were unknowingly included in a court motion by Cohen's attorney, David M. Schwartz. The misuse highlights emerging risks as AI-generated content increasingly infiltrates professional legal practice.


Incident 469 · 3 Reports
Automated Adult Content Detection Tools Showed Bias against Women's Bodies


Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.


Incident 12 · 1 Report
Common Biases of Vector Embeddings


Researchers from Boston University and Microsoft Research New England demonstrated gender bias in the most common techniques used to embed words for natural language processing (NLP).


Incident 81 · 1 Report
Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers


A study by the University of Toronto, the Vector Institute, and MIT showed that the databases used to train AI systems for classifying chest X-rays led those systems to exhibit gender, socioeconomic, and racial biases.


Related Entities