AIID Blog

Submit Your AI Incident Research to IAAI!

Posted 2024-08-12 by Kevin Paeth.

Answer the Questions: What happened and why?  

Who: AI, human factors, safety, and ethics researchers 

Where: Innovative Applications of Artificial Intelligence (IAAI) at AAAI in Philadelphia, PA 

When: February 25 - March 4, 2025 

Why: Building the safety culture surrounding artificial intelligence via impactful proceedings 

Award: The Digital Safety Research Institute (DSRI) of UL Research Institutes is sponsoring a US$1k best paper award that also comes with up to US$1k in additional travel support for presenting authors. 

  • Analyze the development and deployment processes and artifacts in relation to harms realized in the real world for one or more AI incidents.
  • Analyze AI incident trends to understand where there are open problems in the deployment of AI systems.
  • Detail best practices in producing and deploying systems to prevent or mitigate AI incidents.

Mitigating harmful AI incidents requires that we understand not only the relevant technical factors, but also the sociotechnical context that shapes the assumptions and decisions made at each stage of research, development, and deployment. IAAI and the Digital Safety Research Institute assembled a program committee of AI, human factors, safety, and ethics researchers to provide authors with the multi-faceted feedback necessary to advance research on this complex and critical topic.

In this final short paper track, we encourage papers analyzing the factors related to AI incidents and the best practices for preventing or mitigating their recurrence. These papers will be reviewed based on different criteria than the longer papers of the main IAAI tracks. The papers must (a) analyze the development and deployment processes and artifacts in relation to harms realized in the real world for one or more AI incidents, (b) analyze AI incident trends to understand where there are open problems in the deployment of AI systems, and/or (c) detail best practices in producing and deploying systems to prevent or mitigate AI incidents.  

Papers will be reviewed by a program committee of AI, human factors, safety, and ethics researchers, with a particular focus on whether the paper informs the safe practice of deploying intelligent systems in the world.

Papers in this track may be between four (4) and six (6) pages long, including references, in the prescribed AAAI 2025 Author Kit. Papers of at most two (2) pages may also be submitted and considered exclusively for presentation as a poster.

All accepted submissions will be added to the AI Incident Database for referenced AI Incidents.
