Defining an "AI Incident"
The commercial air travel industry owes much of its increasing safety to systematically analyzing and archiving past accidents and incidents within a shared database. In aviation, an accident is a case where substantial damage or loss of life occurs. Incidents are cases where the risk of an accident substantially increases. For example, when a small fire is quickly extinguished in a cockpit, it is an "incident," but if the fire burns crew members in the course of being extinguished, it is an "accident." The FAA aviation database indexes flight log data and subsequent expert investigations into comprehensive examinations of both technological and human factors. In part due to this continual self-examination, air travel is one of the safest forms of travel. Decades of iterative improvements to safety systems and training have decreased fatalities 81-fold since 1970, when normalized for passenger miles.
Where the aviation industry has clear definitions, computer scientists and philosophers have long debated foundational definitions of artificial intelligence. In the absence of clear lines differentiating algorithms, intelligence, and the harms they may directly or indirectly cause, this database adopts adaptive criteria for ingesting "incidents," where reports are accepted or rejected on the basis of a growing rule set.
A prospective incident will be considered for inclusion in the database if both of the following are true:
- An identifiable intelligent system is involved
- Real harms did or could reasonably result from the identified behavior, decision, or event
Prospective incidents will be rejected from the database under the following conditions:
- It is a thought experiment not tied to a real intelligent system
Additional rules for including or rejecting prospective incidents will be added over time to ensure the integrity of the dataset. However, the database will err toward including borderline incidents rather than excluding them. Our working definition of "AI incident" is the following:
AI incidents are events or occurrences in real life that caused or had the potential to cause physical, financial, or emotional harm to people, animals or the environment.
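The rule set above can be pictured as a simple accept/reject filter over candidate reports. The sketch below is purely illustrative: the field names (`intelligent_system`, `harm_occurred`, `harm_plausible`, `thought_experiment`) are hypothetical, not part of the database's actual schema, and real triage is performed by human editors rather than code.

```python
def should_include(report: dict) -> bool:
    """Illustrative filter mirroring the stated inclusion/rejection rules.

    The field names used here are hypothetical stand-ins for an editor's
    judgment, not the database's real schema.
    """
    # Inclusion rule: an identifiable intelligent system must be involved.
    if not report.get("intelligent_system"):
        return False
    # Inclusion rule: real harm did or could reasonably result.
    if not (report.get("harm_occurred") or report.get("harm_plausible")):
        return False
    # Rejection rule: thought experiments not tied to a real system are excluded.
    if report.get("thought_experiment"):
        return False
    return True


candidates = [
    # A deployed system that caused real harm: accepted.
    {"intelligent_system": "chatbot", "harm_occurred": True},
    # No identifiable intelligent system: rejected.
    {"intelligent_system": None, "harm_plausible": True},
    # A pure thought experiment: rejected.
    {"intelligent_system": "trolley AI", "harm_plausible": True,
     "thought_experiment": True},
]
print([should_include(c) for c in candidates])  # [True, False, False]
```

Note that, per the policy above, borderline cases that the rules do not clearly reject would be accepted by human editors rather than filtered out.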
Initial Collection Methodology
The aviation industry in the United States is required by law to report incidents and accidents to the Federal Aviation Administration. There is no compulsory reporting requirement for intelligent systems, so the AIID is built on the incidents that have gained sufficient notoriety to be covered in either the popular or the research press. As a result, you should consider the database to be representative of "public incidents," rather than of all incidents known to have occurred throughout the artificial intelligence industry.
The current database is dominated by the initial collection of incident reports, which were assembled by research assistant Sam Yoon using the following methodology:
- An initial set of links was assembled by Yampolskiy and Olsson.
- The links indicated key words to find related results in a series of Google searches.
- Potentially relevant links in the top three pages of Google search results were opened and read to determine whether they contained quality information about the event described by the sources. If relevant, they were added to the dataset.
- Other relevant-looking incidents identified during the search process were also included in the dataset
- Certain types of results were excluded because they were hypothetical, purely academic, or did not describe real harm
Download the Index
The public index is available for export as a document database. Please contact us for more information and be sure to let us know about your work so we can list it on this page.
Citing the Database as a Whole
We invite you to cite:
McGregor, S. (2021) Preventing Repeated Real World AI Failures by Cataloging Incidents: The AI Incident Database. In Proceedings of the Thirty-Third Annual Conference on Innovative Applications of Artificial Intelligence (IAAI-21). Virtual Conference.
The pre-print is available on arXiv.
Citing a Specific Incident
Every incident has its own suggested citation that credits both the submitter(s) and the editor(s) of the incident. The submitters are the people who submitted reports associated with the incident, and their names are listed in the order in which their submissions were added to the AIID. Since reports can be added to an incident record over time, our suggested citation format includes the access date. You can find incident citations at
While formal AI incident research is relatively new, a number of people have been collecting what could be considered incidents. These include:
- Awesome Machine Learning Interpretability: AI Incident Tracker
- AI and Algorithmic Incidents and Controversies by Charlie Pownall
- Map of Helpful and Harmful AI
If you have an incident resource that could be added here, please contact us.