AIID Blog

Submit Your AI Incident Research to IAAI!

Posted 2024-08-12 by Kevin Paeth.

Answer the Questions: What happened and why?  

Who: AI, human factors, safety, and ethics researchers 

Where: Innovative Applications of Artificial Intelligence (IAAI) at AAAI in Philadelphia, PA 

When: February 25 - March 4, 2025 

Why: Building the safety culture surrounding artificial intelligence via impactful proceedings 

Award: The Digital Safety Research Institute (DSRI) of UL Research Institutes is sponsoring a US$1k best paper award that also comes with up to US$1k in additional travel support for presenting authors. 

  • Analyze the development and deployment processes and artifacts in relation to harms realized in the real world for one or more AI incidents. 
  • Analyze AI incident trends to understand where there are open problems in the deployment of AI systems. 
  • Detail best practices in producing and deploying systems to prevent or mitigate AI incidents. 

Mitigating harmful AI incidents requires that we understand not only the relevant technical factors, but also the sociotechnical context that shapes the assumptions and decisions made at each stage of research, development, and deployment. IAAI and the Digital Safety Research Institute have assembled a program committee of AI, human factors, safety, and ethics researchers to provide authors with the multi-faceted feedback necessary to advance research on this complex and critical topic.

In this final short paper track, we encourage papers analyzing the factors related to AI incidents and the best practices for preventing or mitigating their recurrence. These papers will be reviewed based on different criteria than the longer papers of the main IAAI tracks. The papers must (a) analyze the development and deployment processes and artifacts in relation to harms realized in the real world for one or more AI incidents, (b) analyze AI incident trends to understand where there are open problems in the deployment of AI systems, and/or (c) detail best practices in producing and deploying systems to prevent or mitigate AI incidents.  

Papers will be reviewed by a program committee of AI, human factors, safety, and ethics researchers, with a particular focus on whether the paper informs the safe practice of deploying intelligent systems in the world.

Papers in this track may be between four (4) and six (6) pages long, including references, in the prescribed AAAI 2025 Author Kit format. Papers of at most two (2) pages may also be submitted and will be considered exclusively for presentation as a poster.
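
For authors setting up their manuscript, a minimal LaTeX skeleton along the lines of the sketch below can help confirm the page budget early. This is only an illustration: it assumes the AAAI 2025 Author Kit ships a style file named aaai25.sty, so take the exact file names, options, and required packages from the kit itself.

    \documentclass[letterpaper]{article}
    % Assumption: the AAAI 2025 Author Kit provides a style file named
    % aaai25.sty; substitute the actual name, options, and packages
    % required by the kit.
    \usepackage{aaai25}

    \title{What Happened and Why: An Analysis of One AI Incident}
    \author{Anonymous Submission}

    \begin{document}
    \maketitle

    \section{Incident Description}
    % Full short papers: four (4) to six (6) pages including references.
    % Poster-only submissions: at most two (2) pages.
    Body text goes here.

    \end{document}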

All accepted submissions will be added to the AI Incident Database for the referenced AI incidents.
