About
Why "AI Incidents"?
Intelligent systems are currently prone to unforeseen and often dangerous failures when they are deployed to the real world. Much like the transportation sector before it (e.g., the FAA and FARS) and, more recently, computer systems, intelligent systems require a repository of problems experienced in the real world so that future researchers and developers may mitigate or avoid repeated bad outcomes.
What is an Incident?
The initial set of more than 1,000 incident reports is intentionally broad in nature. Current examples include:
- An autonomous car kills a pedestrian
- A trading algorithm causes a market "flash crash" where billions of dollars transfer between parties
- A facial recognition system causes an innocent person to be arrested
You are invited to explore the incidents collected to date, view the complete listing, and submit additional incident reports. Researchers are invited to review our working definition of AI incidents.
Current and Future Users
The database is a constantly evolving data product and collection of applications.
- Current Users include system architects, industrial product developers, public relations managers, researchers, and public policy researchers. These users are invited to use the Discover application to proactively discover how recently deployed intelligent systems have produced unexpected outcomes in the real world. In so doing, they may avoid making similar mistakes in their development.
- Future Uses will evolve through the code contributions of the open source community, including additional database summaries and taxonomies.
When Should You Report an Incident?
When in doubt of whether an event qualifies as an incident, please submit it! This project is intended to converge on a shared definition of "AI Incident" through exploration of the candidate incidents submitted by the broader community.
Board of Directors
The incident database is managed in a participatory manner by persons and organizations contributing code, research, and broader impacts. If you would like to participate in the governance of the project, please contact us and include your intended contribution to the AI Incident Database.
Voting Members
- Patrick Hall: Patrick was recently principal scientist at bnh.ai, a D.C.-based law firm specializing in AI and data analytics. Patrick is also an assistant professor at the George Washington University School of Business. Before co-founding bnh.ai, Patrick led responsible AI efforts at the machine learning software firm H2O.ai, where his work resulted in one of the world’s first commercial solutions for explainable and fair machine learning. Among other academic and technology media writing, Patrick is the primary author of popular e-books on explainable and responsible machine learning. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University. Contributions: Patrick is a leading contributor of incident reports to the AI Incident Database and provides strategic leadership for the board.
- Heather Frase: Heather Frase, PhD is a Senior Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where she works on AI assessment. She also serves as an unpaid advisor to Meta’s Open Loop project, providing expertise on implementation of the National Institute of Standards and Technology’s AI Risk Management Framework. Prior to joining CSET, Heather spent eight years providing data analytics, computational modeling, machine learning (ML), and artificial intelligence (AI) support for intelligence, defense, and federal contracts. Additionally, Heather spent 14 years at the Institute for Defense Analyses (IDA), supporting the Director, Operational Test and Evaluation (DOT&E). At IDA she led analytic research teams that applied scientific, technological, and statistical expertise to develop data metrics and collection plans for operational tests of major defense systems, analyze test data, and produce assessments of operational effectiveness and suitability. She has a PhD in Materials Science from the California Institute of Technology and a BS in Physics from Miami University in Oxford, Ohio. Contributions: Heather develops AI incident research in addition to her oversight of the CSET taxonomy.
- Kristian J. Hammond: Kris Hammond is the Bill and Cathy Osborn Professor of Computer Science at Northwestern University and co-founder of the artificial intelligence company Narrative Science, recently acquired by Salesforce. He is also the faculty lead of Northwestern’s CS + X initiative, which explores how computational thinking can be used to transform fields such as law, medicine, education, and business. He is director of Northwestern’s Master of Science in Artificial Intelligence (MSAI) program. Most recently, Dr. Hammond founded the Center for Advancing Safety in Machine Intelligence (CASMI), a research hub funded by Underwriters Laboratories. CASMI is focused on operationalizing the design and evaluation of AI systems from the perspective of their impact on human life and safety. Contributions: Kris is developing a collaborative project centered on case studies of incidents.
Emeritus Board
Emeritus board members are those who have particularly distinguished themselves in their service to the Responsible AI Collaborative. They hold no governance position within the organization.
- Sean McGregor: Sean McGregor founded the AI Incident Database project and recently joined the Digital Safety Research Institute as a founding director. Prior to starting the Responsible AI Collaborative, Sean left a position as machine learning architect at the neural accelerator startup Syntiant so he could focus on the assurance of intelligent systems full time. Dr. McGregor's work spans neural accelerators for energy-efficient inference, deep learning for speech and heliophysics, and reinforcement learning for wildfire suppression policy. Outside his paid work, Sean organized a series of workshops at major academic AI conferences on the topic of "AI for Good" and now seeks to safely realize the benefits of AI by bridging AI incident records to AI test programs.
- Helen Toner: Helen Toner is Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI. Helen has written for Foreign Affairs and other outlets on the national security implications of AI and machine learning for China and the United States, and has testified before the U.S.-China Economic and Security Review Commission. She is a member of the board of directors for OpenAI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne.
Collaborators
Responsible AI Collaborative: People who serve the organization behind the AI Incident Database.
Digital Safety Research Institute (DSRI): People affiliated with DSRI, which provides substantial support to the AIID program.
- Kevin Paeth: Lead with DSRI
- César Varela: Full Stack Engineer
- Luna McNulty: UX Engineer
- Pablo Costa: Full Stack Engineer
- Clara Youdale Pinelli: Front End Engineer
- Sean McGregor: Director with DSRI
Incident Editors: People who resolve incident submissions to the database and maintain them.
Additionally, Zachary Arnold made significant contributions to the incident criteria.
Taxonomy Editors: Organizations and people who have contributed taxonomies to the database.
Partnership on AI staff members:
Jingying Yang and Dr. Christine Custis contributed significantly to the early stages of the AIID.
Open Source Contributors: People who have contributed more than one pull request, graphic, piece of site copy, or bug report to the AI Incident Database.
- Neama Dadkhahnikoo: Neama served as the volunteer executive director and board observer for the Responsible AI Collaborative.
- Kit Harris: Kit served as board observer and provided strategic advice from his position as grant advisor.
- Alex Muscă
- Chloe Kam: Developed the AIID logo
- JT McHorse
- Seth Reid
Incident Contributors: People who have contributed a large number of incidents to the database.
- Roman Lutz (Max Planck Institute for Intelligent Systems, formerly Microsoft)
- Patrick Hall (Burt and Hall LLP)
- Catherine Olsson (Google)
- Roman Yampolskiy (University of Louisville)
- Sam Yoon (as contractor to PAI, then with Deloitte Consulting, then with the Kennedy School of Government)
The following people have collected a large number of incidents that are pending ingestion.
- Zachary Arnold, Helen Toner, Ingrid Dickinson, Thomas Giallella, and Nicolina Demakos (Center for Security and Emerging Technology, Georgetown)
- Charlie Pownall via AI, algorithmic and automation incident and controversy repository (AIAAIC)
- Lawrence Lee, Darlena Phuong Quyen Nguyen, Iftekhar Ahmed (UC Irvine)
There is a growing community of people concerned with the collection and characterization of AI incidents, and we encourage everyone to contribute to the development of this system.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. Governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.
View the Responsible AI Collaborative's Form 990 and tax-exempt application.