Related Work
While formal AI incident research is relatively new, a number of people and organizations have been collecting what could be considered incidents. These include:
- Awesome Machine Learning Interpretability: AI Incident Tracker
- AI and Algorithmic Incidents and Controversies, by Charlie Pownall
- Map of Helpful and Harmful AI
If you have an incident resource that could be added here, please contact us.
The following publications have been indexed by Google Scholar as referencing the database itself, rather than solely individual incidents. Please contact us if your reference is missing.
2022 (through September 12th)
- Braga, Juliao, et al. "Project for the Development of a Paper on Algorithm and Data Governance." (2022). (Original Portuguese).
- NIST. Risk Management Playbook. 2022.
- Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
- Schwartz, Reva, et al. "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." (2022).
- McGrath, Quintin, et al. "An Enterprise Risk Management Framework to Design Pro-Ethical AI Solutions." University of South Florida. (2022).
- Nor, Ahmad Kamal Mohd, et al. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study of Real-World Gas Turbine Anomalies." (2022).
- Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." arXiv preprint arXiv:2203.00938 (2022).
- Hundt, Andrew, et al. "Robots Enact Malignant Stereotypes." 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
- Tidjon, Lionel Nganyewou, and Foutse Khomh. "Threat Assessment in Machine Learning based Systems." arXiv preprint arXiv:2207.00091 (2022).
- Naja, Iman, et al. "Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information." IEEE Access 10 (2022): 74383-74411.
- Cinà, Antonio Emanuele, et al. "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning." arXiv preprint arXiv:2205.01992 (2022).
- Schröder, Tim, and Michael Schulz. "Monitoring machine learning models: A categorization of challenges and methods." Data Science and Management (2022).
- Corea, Francesco, et al. "A principle-based approach to AI: the case for European Union and Italy." AI & SOCIETY (2022): 1-15.
- Carmichael, Zachariah, and Walter J. Scheirer. "Unfooling Perturbation-Based Post Hoc Explainers." arXiv preprint arXiv:2205.14772 (2022).
- Wei, Mengyi, and Zhixuan Zhou. "AI Ethics Issues in Real World: Evidence from AI Incident Database." arXiv preprint arXiv:2206.07635 (2022).
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions." IEEE Access (2022).
- Karunagaran, Surya, Ana Lucic, and Christine Custis. "XAI Toolsheet: Towards A Documentation Framework for XAI Tools."
- Paudel, Shreyasha, and Aatiz Ghimire. "AI Ethics Survey in Nepal."
- Ferguson, Ryan. "Transform Your Risk Processes Using Neural Networks."
- Fujitsu Corporation. "AI Ethics Impact Assessment Casebook." 2022.
- Shneiderman, Ben, and Mengnan Du. "Human-Centered AI: Tools." 2022.
- Salih, Salih. "Understanding Machine Learning Interpretability." Medium. 2022.
- Garner, Carrie. "Creating Transformative and Trustworthy AI Systems Requires a Community Effort." Software Engineering Institute. 2022.
- Weissinger, Laurin. "AI, Complexity, and Regulation." The Oxford Handbook of AI Governance (February 14, 2022).
2021
- Arnold, Zachary, and Helen Toner. "AI Accidents: An Emerging Threat." CSET Policy Brief (2021).
- Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions." Philosophies 6.1 (2021): 6.
- Falco, Gregory, and Leilani H. Gilpin. "A stress testing framework for autonomous system verification and validation (v&v)." 2021 IEEE International Conference on Autonomous Systems (ICAS). IEEE, 2021.
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions." arXiv preprint arXiv:2107.09546 (2021).
- John-Mathews, Jean-Marie. AI Ethics in Practice, Challenges and Limitations. Diss. Université Paris-Saclay, 2021. (Original French).
- Macrae, Carl. "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk." (June 4, 2021).
- Hong, Matthew K., et al. "Planning for Natural Language Failures with the AI Playbook." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
- Ruohonen, Jukka. "A Review of Product Safety Regulations in the European Union." arXiv preprint arXiv:2102.03679 (2021).
- Kalin, Josh, David Noever, and Matthew Ciolino. "A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models." arXiv preprint arXiv:2103.02718 (2021).
- Aliman, Nadisha Marie, and Leon Kester. "Epistemic defenses against scientific and empirical adversarial AI attacks." CEUR Workshop Proceedings. Vol. 2916. CEUR WS, 2021.
- Smith, Catherine. "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape." IFLA Journal (2021): 03400352211057145.
If you have a scholarly work that should be added here, please contact us.