AIID Blog

AI Incident Roundup – November and December 2025 and January 2026

Posted 2026-02-02 by Daniel Atherton.

Le Front de l'Yser (Flandre), Georges Lebacq, 1917

🗄 Trending in the AIID

Between the beginning of November 2025 and the end of January 2026, the AI Incident Database added a batch of 108 new incident IDs: Incident 1254 through Incident 1361. While many of these incidents are contemporaneous, some occurred a while back, including one from 2020. The full list of additions can be seen at the bottom of this article.

Deepfake-enabled fraud is now a default business model

The largest and most repetitive thread in this update window, consistent with previous roundups, is impersonation-for-profit, especially "investment opportunity" scams that borrow legitimacy from familiar faces and trusted formats. The dataset includes repeated variations on the same playbook: a public figure (usually a politician, niche media personality, celebrity, or business leader) "endorses" a product or platform; the content is distributed via high-reach social networks; and victims are routed into a funnel that ends in a money transfer.

This shows up across regions and targets. For example, Thai news presenters and business figures (Incidents 1254, 1255), Swedish investors (1256), Australia (1260 and 1261), Greece's finance minister (1271), Guernsey, Malta, and Cyprus via government-figure impersonations tied to crypto fraud (1288, 1289, 1290, 1293), and a long tail of Elon Musk-branded variants (1276, 1306, 1317, 1325, 1328).

Health-adjacent deception also recurs: deepfake "doctor" endorsements and wellness product marketing (1314, 1341, and 1359), and hybrid scams that blend "medical claims" with conversion funnels (1317). The throughline is that synthetic media needs only to be plausible enough, for long enough, to move someone from noticing a piece of content to acting on it. It's tempting to think of deepfakes as just a content problem, but increasingly they are readable as the front end of an industrial fraud stack. They slot neatly into platform-scale ad targeting and optimization (1268), so that impersonation becomes repeatable infrastructure. They can be produced at low cost, tweaked in response to what actually draws clicks and messages, and then reliably routed into cash-transfer funnels.

Synthetic sexual harm is expanding in both scale and operationalization

Another high-signal cluster is non-consensual sexual imagery and CSAM-related harms.

This update window includes multiple reported school-centered incidents involving minors (1266, 1315, 1348, 1350, 1351, 1352, 1354), harassment campaigns and political intimidation (1301), and platform-level commercialization harms (1335). The reporting on Grok's reply prompts incident (1329) also belongs here, because it shows how generative systems can become "on-demand" production spaces for sexualized content in high-visibility social platforms. If read as a group, these records point to a hard truth. The images are in and of themselves harmful, but they also feed a downstream ecosystem of social humiliation and coercion, one in which distribution is effectively permanent and institutions often cannot respond with the timeliness that victims need and deserve.

Institutional misuse and "official" credibility failures are becoming part of the harm chain

A portion of these additions involve credibility-bearing institutions, such as government agencies, functioning as inadvertent amplifiers. Their official reach and procedural legitimacy can cause misinformation or mishandled data to circulate more widely and carry greater force than it otherwise would.

Some are classic "AI in the loop" governance failures. For example, an Argentine court allegedly annulled a conviction after a judge purportedly used ChatGPT without disclosure (1257). A Canadian national tax chatbot reportedly gave incorrect guidance at scale (1310). In the U.S., CISA's acting director reportedly uploaded sensitive material into a public ChatGPT instance (1360). Elsewhere, a reported false earthquake alert (1303) and a purported AI-generated forecast graphic error (1332) show how automated outputs can become "official reality" simply by being published.

Some incidents are about the performative use of AI. The AIID does not generally catalogue political satire involving AI. It does, however, record cases where synthetic media, especially when disseminated by institutional actors, blurs the distinction between authentic and manipulated evidence in ways that plausibly erode public trust. One example is the White House allegedly sharing an altered arrest photo (1357). In other high-profile cases within this same overarching context, a swirl of manipulated visuals and false attributions reportedly circulated widely enough to complicate public understanding of the Minneapolis killings of Renee Good (1334) and, later, Alex Pretti (1358), all examples of how synthetic content and misattributed imagery can muddy the evidentiary record and collective sense-making.

Consumer-facing chatbots keep producing high-stakes wrongness

The list also contains several incidents where chatbots are reportedly implicated in dangerous or consequential outputs. Some examples include the persistent allegations of encouragement of self-harm (1259); allegedly incorrect financial guidance for users in the United Kingdom (1279); and purportedly harmful medical advice linked to negative outcomes in India (1281). There are also incidents centered on intellectual property and attribution (1278, 1294, 1298), which matter because they reveal how these systems can produce "confident" outputs that are legally and epistemically unstable.

Physical-world autonomy and safety

Finally, this update window includes a block of incidents involving autonomous vehicle behavior and sensor-driven response systems (beyond the earthquake and weather forecast incidents mentioned above).

Waymo is reportedly implicated in multiple cases (1269, 1300, 1326, 1337, 1361), ranging from alleged collisions to regulatory probes and operational failures during emergencies. Separately, surveillance and alert systems reportedly triggered high-friction, high-cost responses, such as false gun alerts (1267, 1312). These cases are especially important because even "minor" errors can produce large institutional reactions, such as lockdowns and police deployments.

What this batch suggests

These 108 additions point to several things worth taking into consideration, or that at least serve as helpful reminders.

  • Harm is often mediated through credibility surfaces, such as familiar faces and names and seemingly official accounts.
  • The dominant threat pattern should not be thought of as "superintelligence." A more apt description is "industrialized plausibility": cheap realism + distribution + weak verification = industrialized credibility capture.
  • The most severe impacts frequently come from the collision of AI outputs with human institutions, such as courts and schools, where the cost of being wrong is high and correction is slow.

The AIID exists to make these patterns visible across jurisdictions and domains. This update window is a snapshot of the ongoing integration of generative AI into contemporary life, and a reminder that harm events are increasingly systematized and becoming infrastructural artifacts, even as we rely on public narrations of high-visibility incidents to understand the underlying systems.

🗞️ New Incident IDs in the Database

  • Incident 1254 - Purported AI Deepfake Reportedly Impersonated Thai PBS World Anchor and Miss Universe CEO in Fraudulent Investment Video (6/24/2025)
  • Incident 1255 - Purported Deepfake Reportedly Circulated on Facebook Impersonating Thai PBS World Anchors and Business Figures to Solicit Investments (8/28/2025)
  • Incident 1256 - Purportedly AI-Generated Deepfake Investment Ads Defrauded 5,000 Swedish Investors of 500 Million SEK (10/24/2025)
  • Incident 1257 - Argentine Court Reportedly Annuls Criminal Conviction After Judge Allegedly Used ChatGPT to Draft Ruling Without Disclosure (6/4/2025)
  • Incident 1258 - Purported Deepfake Mimicking RTÉ Broadcast Falsely Announced Irish Presidential Candidate Catherine Connolly's Withdrawal (10/22/2025)
  • Incident 1259 - ChatGPT Allegedly Encouraged 23-Year-Old Texas User's Suicide During Extended Conversations (7/25/2025)
  • Incident 1260 - Purported Deepfake of Andrew Forrest Used to Promote Fraudulent 'Quantum AI' Crypto Platform on Facebook (1/27/2024)
  • Incident 1261 - Alleged AI-Generated Deepfake of Western Australia Premier Roger Cook Used in YouTube Investment Scam (11/8/2025)
  • Incident 1262 - YouTube Channel Reportedly Posts Purported Deepfake Video of Rajat Sharma Announcing India-Bangladesh Conflict (9/29/2025)
  • Incident 1263 - Chinese State-Linked Operator (GTG-1002) Reportedly Uses Claude Code for Autonomous Cyber Espionage (11/13/2025)
  • Incident 1264 - Rep. Mike Collins's Campaign Allegedly Produced Deepfake of Sen. Jon Ossoff Supporting the Government Shutdown (11/10/2025)
  • Incident 1265 - NTB Report on Telenor Security Findings Was Withdrawn After AI Tool Allegedly Introduced Fabricated Quotes (10/28/2025)
  • Incident 1266 - Purportedly AI-Generated Sexual Images of At Least 400 Minors at Zacatecas School Were Reportedly Created and Sold Online (11/10/2025)
  • Incident 1267 - Omnilert AI Reportedly Triggered False Gun Alert at Parkville High, Prompting Student Relocation (11/7/2025)
  • Incident 1268 - Meta's Automated Ad and Targeting Systems Reportedly Enabled Large-Scale Fraud Revenue (11/6/2025)
  • Incident 1269 - Waymo Autonomous Vehicle Reportedly Ran Over and Killed a Cat in San Francisco (10/27/2025)
  • Incident 1270 - Multiple Purported AI-Assisted Cheating Incidents Reported Across South Korea's SKY Universities During October 2025 Midterms (10/15/2025)
  • Incident 1271 - Purported Deepfake of Greek Finance Minister Kyriakos Pierrakakis Reportedly Used in Facebook Investment Scam (11/14/2025)
  • Incident 1272 - Purportedly AI-Generated Home Intruder Videos Allegedly Prompt Dozens of Dutch Police Call-Outs (10/5/2025)
  • Incident 1273 - Purportedly AI-Generated Fake Videos of Louvre Heist Reportedly Circulated Widely Online (10/26/2025)
  • Incident 1274 - AI-Powered Taco Bell Drive-Thru Reportedly Disrupted by Viral Prank Ordering 18,000 Water Cups
  • Incident 1275 - Purportedly AI-Enhanced Phishing Campaign Allegedly Impersonates Australian Government Services in Large-Scale Welfare Scam (11/17/2025)
  • Incident 1276 - Ottawa Couple Reportedly Loses CA$177,023 After Purported Deepfake Elon Musk Investment Scam (10/1/2023)
  • Incident 1277 - Alleged Harmful Outputs and Data Exposure in Children's AI Products by FoloToy, Miko, and Character.AI (11/21/2025)
  • Incident 1278 - ChatGPT Reportedly Found to Reproduce Protected German Lyrics in Copyright Case (11/11/2025)
  • Incident 1279 - Prominent AI Chatbots Allegedly Produced Incorrect UK Financial and ISA Guidance (11/18/2025)
  • Incident 1280 - Reported Use of AI Voice and Identity Manipulation in the Ongoing 'Phantom Hacker' Fraud Scheme (10/20/2023)
  • Incident 1281 - Alleged Harmful Health Outcomes Following Reported Use of Purported ChatGPT-Generated Medical Advice in Hyderabad (11/10/2025)
  • Incident 1282 - Reported Disqualification of Two Books from the Ockham New Zealand Book Awards Due to Alleged AI-Generated Cover Art (11/17/2025)
  • Incident 1283 - Purported AI-Enabled Pro-Russian Influence Campaign Centered on Burkina Faso's Ibrahim Traoré and Disseminated Across African Media (9/30/2025)
  • Incident 1284 - Secret Desires AI Platform Reportedly Exposed Nearly Two Million Sensitive Images in Cloud Storage Leak (11/19/2025)
  • Incident 1285 - Purportedly AI-Generated Jason Momoa Deepfake Used in Romance Scam Reportedly Defrauding British Widow of $600,000 (11/29/2025)
  • Incident 1286 - Purportedly AI-Assisted Citation Errors Allegedly Found in Newfoundland and Labrador's 2025 Health Workforce Report by Deloitte (5/29/2025)
  • Incident 1287 - Purported Deepfake-Based Facebook Impersonation Reportedly Targets Daughter of Scot in Coma (11/26/2025)
  • Incident 1288 - Purported Deepfake Video and Fake News Articles Allegedly Used to Impersonate Guernsey's Chief Minister in Investment Scam (8/1/2025)
  • Incident 1289 - Malta's Prime Minister Robert Abela Reportedly Deepfaked by a Ukrainian National in Cryptocurrency Fraud Targeting Local Residents (7/28/2025)
  • Incident 1290 - Alleged Fabricated News Sites and Deepfakes Impersonated Maltese Ministers, Financial Experts, and Media to Promote NethertoxAGENT Fraud (10/27/2025)
  • Incident 1291 - South Korean Fraud Ring Allegedly Used Deepfake Identities to Traffic Victims into Cambodia Scam Operations (8/1/2024)
  • Incident 1292 - Glasgow Man Allegedly Used AI Tool to Create and Share Non-Consensual Deepfake Nude Images of Former Classmate (2/1/2024)
  • Incident 1293 - Purported Deepfake Impersonating Cyprus President Nikos Christodoulides Reportedly Defrauded Citizens of Thousands of Euros (12/6/2025)
  • Incident 1294 - The New York Times Sued Perplexity for Allegedly Using Copyrighted Content and Generating False Attributions (12/5/2025)
  • Incident 1295 - Japanese Teen Allegedly Uses AI-Generated Program to Breach Kaikatsu Frontier and Leak Data of 7.3 Million Customers (1/18/2025)
  • Incident 1296 - Attacker Reportedly Bypasses AI Safety Filters to Obtain Guidance for Non-Fatal Hammer Assault in Denmark (2/5/2025)
  • Incident 1297 - Blogger Milagro Gramz Allegedly Promoted AI-Generated Pornographic Deepfake Targeting Megan Thee Stallion (10/30/2024)
  • Incident 1298 - Perplexity AI Reportedly Accused in Federal Lawsuit of Purported Copyright Infringement and False Attribution of Chicago Tribune Content (12/4/2025)
  • Incident 1299 - Bodycam Footage Reportedly Contradicted Purportedly ChatGPT-Generated Use-of-Force Narrative by Immigration Agent (10/3/2025)
  • Incident 1300 - Waymo Self-Driving Vehicles Reportedly Passed Stopped School Buses at Least 19 Times, Prompting NHTSA Probe (12/4/2025)
  • Incident 1301 - Purported AI-Generated Sexual Deepfakes Allegedly Deployed in Transnational Harassment Campaign Targeting Hong Kong Exiles (11/11/2025)
  • Incident 1302 - Reported Viral AI-Generated Photo Purportedly Shows Donald Trump Using a Walker (12/11/2025)
  • Incident 1303 - USGS ShakeAlert System Reportedly Generated False Earthquake Alert Affecting Nevada and California (12/4/2025)
  • Incident 1304 - Whirlpool Reportedly Used AI-Altered Footage of North Carolina State Senator DeAndrea Salvador in Brazilian Advertisement (6/1/2025)
  • Incident 1305 - UK Facial Recognition System Reportedly Exhibits Higher False Positive Rates for Black and Asian Subjects (12/5/2025)
  • Incident 1306 - Florida Couple Reportedly Loses $45,000 in Alleged AI-Generated Elon Musk Impersonation Scam (12/15/2025)
  • Incident 1307 - Grok AI Reportedly Generated Fabricated Civilian Hero Identity During Bondi Beach Shooting (12/15/2025)
  • Incident 1308 - Springer Nature Book 'Mastering Machine Learning: From Basics to Advanced' Reportedly Published With Numerous Purportedly Nonexistent or Incorrect Citations (4/18/2025)
  • Incident 1309 - Springer Nature Book 'Social, Ethical and Legal Aspects of Generative AI: Tools, Techniques and Systems' Reportedly Published With Numerous Purportedly Fabricated or Unverifiable Citations (6/17/2025)
  • Incident 1310 - Canada Revenue Agency (CRA) AI Chatbot 'Charlie' Reportedly Gave Incorrect Tax Filing Guidance at Scale (12/12/2025)
  • Incident 1311 - Peppermill Casino Facial Recognition System Reportedly Misidentified Individual, Leading to Wrongful Arrest in Reno (9/17/2023)
  • Incident 1312 - ZeroEyes AI Surveillance System Reportedly Flagged Clarinet as Gun, Triggering School Lockdown in Florida (12/9/2025)
  • Incident 1313 - Anthropic Claude AI Agent Reportedly Caused Financial Losses While Operating Office Vending Machine at Wall Street Journal Headquarters (12/18/2025)
  • Incident 1314 - Purported Deepfake Impersonating Doctor Allegedly Used in $200,000 Investment Scam Targeting Florida Grandmother (12/2/2025)
  • Incident 1315 - Purportedly AI-Generated Nude Images of Middle School Students Reportedly Circulated at Louisiana School (8/26/2025)
  • Incident 1316 - Google AI-Generated Search Summary Reportedly Falsely Implicated Canadian Musician in Sexual Offenses, Leading to Concert Cancellation (12/19/2025)
  • Incident 1317 - Purported Deepfake Impersonation of Elon Musk Used to Promote Fraudulent '17-Hour' Diabetes Treatment Claims (12/27/2025)
  • Incident 1318 - School's Suspected AI-Cheating Allegation Precedes Student's Reported Suicide in Greater Noida, India (12/23/2025)
  • Incident 1319 - Purported Deepfake Investment Video Reportedly Used in Scam That Defrauded Turkish Couple of 1.5 Million Lira (~$35,000 USD) (12/26/2025)
  • Incident 1320 - Purportedly AI-Manipulated Image Reported to Falsely Depict Taiwanese Politician Kao Chia-yu Posing With PRC Flag (12/25/2025)
  • Incident 1321 - Shilpa Shetty Alleges AI-Enabled Impersonation and Misuse of Likeness in Mumbai High Court Filing (11/27/2025)
  • Incident 1322 - Senior Kerala Congress Leader N. Subrahmanian Reportedly Booked for Sharing Purportedly AI-Generated Defamatory Image of Chief Minister Pinarayi Vijayan (12/26/2025)
  • Incident 1323 - Madhya Pradesh Congress Alleges AI-Generated Images Were Submitted in National Water Award Process (12/28/2025)
  • Incident 1324 - Pieces Technologies' Clinical AI Systems Allegedly Marketed With Misleading Performance Claims (9/18/2024)
  • Incident 1325 - Reported AI-Generated Deepfake Videos Impersonating Elon Musk and Dragon’s Den Allegedly Used in Cryptocurrency Investment Scam Targeting Canadian Victims (12/21/2025)
  • Incident 1326 - Waymo Robotaxis Allegedly Contributed to Traffic Gridlock During San Francisco PG&E Power Outage (12/20/2025)
  • Incident 1327 - Reported AI-Generated Deepfake Romance Scam Allegedly Used to Steal One Bitcoin From Recently Divorced Investor (12/31/2025)
  • Incident 1328 - Purported Deepfake Impersonating Elon Musk Allegedly Defrauded Elderly U.S. Woman of $50,000 via Gift Card–to-Crypto Scam (1/3/2026)
  • Incident 1329 - Grok Reportedly Generated and Distributed Nonconsensual Sexualized Images of Adults and Minors in X Replies (12/25/2025)
  • Incident 1330 - Purportedly AI-Generated 'Eric Langford' Missing Boy Scout Hoax Circulating Across Multiple Social Media Platforms (12/13/2025)
  • Incident 1331 - Purported Deepfake Videos Reportedly Impersonated Yanis Varoufakis on YouTube and Social Media (1/5/2026)
  • Incident 1332 - National Weather Service Reportedly Published AI-Generated Forecast Map With Fabricated Idaho Town Names (1/3/2026)
  • Incident 1333 - Purportedly AI-Generated Images and Videos Reportedly Spread Misinformation About Nicolás Maduro's Capture on X (1/3/2026)
  • Incident 1334 - Grok Reportedly Generated False 'Unmasked' Images of ICE Agent, Purportedly Triggering Online Misidentification and Harassment in Minneapolis (1/7/2026)
  • Incident 1335 - OpenDream AI Platform Reportedly Commercialized AI-Generated CSAM and Non-consensual Deepfake Sexual Images (12/1/2023)
  • Incident 1336 - BJP Used Deepfake Videos of Manoj Tiwari to Target Haryanvi Voters in 2020 Delhi Election (2/7/2020)
  • Incident 1337 - Waymo Robotaxi Reportedly Transported Undetected Person Trapped in Trunk in Los Angeles (12/8/2025)
  • Incident 1338 - Purported Deepfake Endorsements Reportedly Used to Promote Fraudulent Health and Investment Products in Montenegro and Bosnia and Herzegovina (12/24/2025)
  • Incident 1339 - Purportedly AI-Cloned Voice Allegedly Used to Defraud Play School Owner of ₹97,500 (~$1,080 USD) in Indore, India (1/6/2026)
  • Incident 1340 - Purported AI-Generated Image Falsely Depicting JD Vance and Usha Vance in Public Altercation Circulated on Social Media (12/9/2025)
  • Incident 1341 - Purported Deepfake Advertisement Falsely Depicting Physician Endorsement Used to Sell Lipedema Cream to U.S. Patient Beth Holland (12/3/2025)
  • Incident 1342 - Purported Deepfake Nude Images of Students Circulated Without Consent at Valencia Educational Institute (12/1/2024)
  • Incident 1343 - ICE AI Resume Screening Error Allegedly Routed Inexperienced Recruits Into Inadequate Training Pathways (1/14/2026)
  • Incident 1344 - Purported AI-Generated Images Depict Kate Garraway With Fictitious Partner (1/15/2026)
  • Incident 1345 - Purported Deepfake Video Allegedly Used to Harass Washington State Patrol Trooper (12/30/2025)
  • Incident 1346 - Purported AI-Generated Videos Depicted George Will Reportedly Commenting on Trump and Supreme Court Rulings (12/23/2025)
  • Incident 1347 - Automated Shuttle Bus Was Reportedly Rear-Ended During U.S. Department of Transportation Demonstration Ride in Washington, D.C. (1/11/2026)
  • Incident 1348 - Purported Deepfake Explicit Images of Middle School Students Allegedly Created and Circulated Using Mobile App in Goffstown, New Hampshire (10/7/2025)
  • Incident 1349 - AI Training Dataset for Detecting Nudity Allegedly Found to Contain CSAM Images of Identified Victims (10/24/2025)
  • Incident 1350 - Reported Use of AI Apps to Create Sexualized Deepfake Images of High School Students at Cascade High School in Iowa (3/25/2025)
  • Incident 1351 - Alleged Use of AI to Create Sexualized Deepfake Images of Middle School Students Under Investigation in Bucks County, Pennsylvania (3/15/2025)
  • Incident 1352 - Malaysian Teenager Allegedly Arrested for Creating and Selling AI-Generated Deepfake Images of Schoolmates and Alumni in Johor (4/8/2025)
  • Incident 1353 - ICE Facial Recognition App Mobile Fortify Reportedly Misidentified Woman Twice During Immigration Enforcement in Oregon (10/15/2025)
  • Incident 1354 - Purportedly AI-Altered Fake Nude Images of High School Girls and Women Reportedly Created and Disseminated in Pensacola, Florida (10/10/2024)
  • Incident 1355 - Reported AI Impersonations of Pastors Used in Online Donation and Influence Scams (1/5/2026)
  • Incident 1356 - Urban VPN Proxy Browser Extension Reportedly Harvested and Sold Private AI Chatbot Conversations via Silent Update (7/9/2025)
  • Incident 1357 - White House Reportedly Shares Purportedly AI-Altered Arrest Photo Depicting Minnesota Protester Nekima Levy Armstrong as Crying (1/22/2026)
  • Incident 1358 - Purportedly AI-Altered Images Reportedly Distort Evidence After Minneapolis Shooting of ICU Nurse Alex Pretti (1/24/2026)
  • Incident 1359 - Reported Deepfake Influencers on TikTok Allegedly Used to Promote Fraudulent Wellness Products (3/4/2025)
  • Incident 1360 - CISA Acting Director Reportedly Uploaded Sensitive Government Documents to Public ChatGPT Instance (7/15/2025)
  • Incident 1361 - Waymo Autonomous Vehicle Reportedly Struck Child Near Elementary School in Santa Monica, California (1/23/2026)

👇 Diving Deeper

  • Check out the Table View and List View for different ways to see and sort all incidents.
  • Explore clusters of similar incidents in Spatial Visualization.
  • Learn about alleged developers, deployers, and harmed parties on the Entities page.

🦾 Support our Efforts

Still reading? Help us change the world for the better!

  1. Share this newsletter on LinkedIn, Twitter, and Facebook.
  2. Submit incidents to the database.
  3. Contribute to the database’s functionality.
