Incidents involved as both Developer and Deployer
Incident 645 · 35 reports
Seeming Pattern of Gemini Bias and Sociotechnical Training Failures Harm Google's Reputation
2024-02-21
Google's Gemini chatbot faced many reported bias issues upon release, producing a variety of problematic outputs such as racial inaccuracies and political biases, including in responses about Chinese and Indian politics. It also reportedly over-corrected for racial diversity in historical contexts and advanced controversial perspectives, prompting a temporary halt and an apology from Google.
Incident 45 · 29 reports
Defamation via AutoComplete
2011-04-05
Google's autocomplete feature alongside its image search results resulted in the defamation of people and businesses.
Incident 71 · 28 reports
Google admits its self driving car got it wrong: Bus crash was caused by software
2016-09-26
On February 14, 2016, a Google autonomous test vehicle was partially responsible for a low-speed collision with a bus on El Camino Real in Google's hometown of Mountain View, CA.
Incident 19 · 27 reports
Sexist and Racist Google Adsense Advertisements
2013-01-23
Advertisements chosen by Google Adsense are reported as producing sexist and racist results.
Incidents Harmed By
Incident 467 · 14 reports
Google's Bard Shared Factually Inaccurate Info in Promo Video
2023-02-07
Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which satellite first took pictures of a planet outside the Earth's solar system, reportedly causing shares to temporarily plummet.
Incident 567 · 1 report
Deepfake Voice Exploit Compromises Retool's Cloud Services
2023-08-27
In August 2023, a hacker reportedly breached Retool, an IT company specializing in business software solutions, affecting 27 cloud customers. The attacker appears to have initiated the breach by sending phishing SMS messages to employees and later used an AI-generated deepfake voice in a phone call to obtain multi-factor authentication codes. The breach seems to have exposed a vulnerability in Google's Authenticator app, specifically its cloud-syncing function, further enabling unauthorized access to internal systems.
Incident 791 · 1 report
Google AI Error Prompts Parents to Use Fecal Matter in Child Training Exercise
2024-09-09
Google's AI Overview feature mistakenly advised parents to use human feces in a potty training exercise, misinterpreting a method that uses shaving cream or peanut butter as a substitute. This incident is another example of an AI failure in grasping contextual nuances that can lead to potentially harmful, and in this case unsanitary, recommendations. Google has acknowledged the error.
Incidents involved as Developer
Incident 623 · 12 reports
Google Bard Allegedly Generated Fake Legal Citations in Michael Cohen Case
2023-12-12
Michael Cohen, former lawyer for Donald Trump, claims to have used Google Bard, an AI chatbot, to generate legal case citations. These false citations were unknowingly included in a court motion by Cohen's attorney, David M. Schwartz. The AI's misuse highlights emerging risks in legal technology, as AI-generated content increasingly infiltrates professional domains.
Incident 469 · 3 reports
Automated Adult Content Detection Tools Showed Bias against Women Bodies
2006-02-25
Automated content moderation tools used to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, resulting in suppressed reach despite the content not breaking platform policies.
Incident 12 · 1 report
Common Biases of Vector Embeddings
2016-07-21
Researchers from Boston University and Microsoft Research New England demonstrated gender bias in the most common techniques used to embed words for natural language processing (NLP).
Incident 81 · 1 report
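One standard way such embedding bias is measured is to project word vectors onto a gender direction (e.g., the difference between the vectors for "he" and "she") and inspect which way occupation words lean. Below is a minimal sketch of that projection test; the toy vectors are hypothetical and chosen only to illustrate the arithmetic, not results from the study, which used pretrained embeddings such as word2vec trained on news text.

```python
import numpy as np

def gender_projection(vec, he, she):
    """Project a word vector onto the normalized he-she direction.

    Positive values lean toward "he", negative toward "she"; a large
    magnitude for an occupation word is read as a bias signal.
    """
    direction = he - she
    direction = direction / np.linalg.norm(direction)
    return float(np.dot(vec, direction))

# Hypothetical toy vectors for illustration only.
he = np.array([0.9, 0.1, 0.2])
she = np.array([0.1, 0.9, 0.2])
programmer = np.array([0.7, 0.2, 0.5])  # toy vector built to lean "he"-ward
homemaker = np.array([0.2, 0.8, 0.4])   # toy vector built to lean "she"-ward

print(gender_projection(programmer, he, she))  # positive
print(gender_projection(homemaker, he, she))   # negative
```

With real embeddings, the same projection applied across many occupation words is what surfaces the systematic gender skew the researchers reported.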
Researchers find evidence of racial, gender, and socioeconomic bias in chest X-ray classifiers
2020-10-21
A study by the University of Toronto, the Vector Institute, and MIT showed the input databases that trained AI systems used to classify chest X-rays led the systems to show gender, socioeconomic, and racial biases.
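Audits of this kind typically compare a performance metric, such as the true-positive rate, across demographic subgroups: a lower rate for one group means its positive cases are missed more often (underdiagnosis). A minimal sketch of that per-group comparison, using invented labels and predictions rather than data from the study:

```python
from collections import defaultdict

def tpr_by_group(labels, preds, groups):
    """Compute the true-positive rate for each demographic group.

    labels: ground-truth values (1 = condition present)
    preds:  classifier outputs (1 = condition predicted)
    groups: group membership for each example
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for y, yhat, g in zip(labels, preds, groups):
        if y == 1:
            pos[g] += 1
            if yhat == 1:
                tp[g] += 1
    return {g: tp[g] / pos[g] for g in pos}

# Hypothetical data for illustration only.
labels = [1, 1, 1, 1, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(tpr_by_group(labels, preds, groups))  # group B's positives missed more often
```

A gap between the per-group rates, as between groups A and B here, is the kind of disparity the chest X-ray study reported along gender, socioeconomic, and racial lines.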
Related Entities
Microsoft
Incidents involved as both Developer and Deployer
- Incident 102 · 2 reports
Personal voice assistants struggle with black voices, new study shows
- Incident 587 · 1 report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
Incidents involved as Developer
Amazon
Incidents involved as both Developer and Deployer
- Incident 102 · 2 reports
Personal voice assistants struggle with black voices, new study shows
- Incident 587 · 1 report
Apparent Failure to Accurately Label Primates in Image Recognition Software Due to Alleged Fear of Racial Bias
Incidents involved as Developer
OpenAI
Incidents involved as both Developer and Deployer
- Incident 367 · 1 report
iGPT, SimCLR Learned Biased Associations from Internet Training Data
- Incident 718 · 1 report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
Incidents involved as Developer
Meta
Incidents involved as both Developer and Deployer
- Incident 718 · 1 report
OpenAI, Google, and Meta Alleged to Have Overstepped Legal Boundaries for Training AI
- Incident 734 · 1 report
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites