Related Work
While formal AI incident research is relatively new, a number of people have been collecting what could be considered incidents. These include:
- Awesome Machine Learning Interpretability: AI Incident Tracker
- Charlie Pownall's AI and Algorithmic Incidents and Controversies
- Map of Helpful and Harmful AI
If you have an incident resource that could be added here, please contact us.
The following publications have been indexed by Google Scholar as referencing the database itself, rather than solely individual incidents. Please contact us if your reference is missing.
Responsible AI Collaborative Research
Where needed to serve the broader safety and fairness communities, the Responsible AI Collaborative produces and sponsors research. Works to date include the following.
- McGregor, Sean. "Preventing repeated real world AI failures by cataloging incidents: The AI incident database." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021. - The original research publication released at the public announcement of the AI Incident Database. All citations of this work will be added to this page.
- McGregor, Sean, Kevin Paeth, and Khoa Lam. "Indexing AI Risks with Incidents, Issues, and Variants." arXiv preprint arXiv:2211.10384 (2022). - A major update to the incident definitions and criteria, as presented at the 2022 NeurIPS Workshop on Human-Centered AI.
- Pittaras, Nikiforos, and Sean McGregor. "A taxonomic system for failure cause analysis of open source AI incidents." arXiv preprint arXiv:2211.07280 (2022). - Our approach to reducing the uncertainty of incident causes when analyzing open source incident reports. Presented at SafeAI.
- Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. "Lessons for Editors of AI Incidents from the AI Incident Database." arXiv preprint arXiv:2409.16425 (2024). - Lessons learned from editing AI incidents, focusing on issues related to their temporal ambiguity, multiplicity, large-scale exposure harms, and inherent uncertainty in reporting. Submitted to the 2025 Conference on Innovative Applications of Artificial Intelligence (IAAI-25).
2023
Citations in peer-reviewed journal articles, book chapters, and preprints
- Ali, S. A., Khan, R., & Ali, S. N. (2023). The Promises and Perils of Artificial Intelligence: An Ethical and Social Analysis. In S. Chakraborty (Ed.), Investigating the Impact of AI on Ethics and Spirituality (pp. 1-24). IGI Global.
- Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., & Roundy, K. (2023). "Real attackers don't compute gradients": Bridging the gap between adversarial ML research and practice. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 1-10). IEEE.
- Bach, T. A., Kristiansen, J. K., Babic, A., & Jacovi, A. (2023). Unpacking human-AI interaction in safety-critical industries: A systematic literature review. arXiv.
- Baeza-Yates, R. (2023). An introduction to responsible AI. European Review, 31(4), 406-421.
- Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review. arXiv.
- Bommasani, R., Klyman, K., Longpre, S., Kapoor, S., Maslej, N., Xiong, B., Zhang, D., & Liang, P. (2023). The Foundation Model Transparency Index. arXiv.
- Bondi-Kelly, E., Hartvigsen, T., Sanneman, L. M., Sankaranarayanan, S., Harned, Z., Wickerson, G., Gichoya, J. W., Oakden-Rayner, L., Celi, L. A., Lungren, M. P., Shah, J. A., & Ghassemi, M. (2023). Taking off with AI: Lessons from aviation for healthcare. In EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 4, pp. 1-14). ACM.
- Chatterjee, R. (2023). The scope of roboethics in business ethics. 3D... IBA Journal of Management & Leadership, 14(2), 22-27.
- Chen, P.-Y., & Liu, S. (2024). Holistic adversarial robustness of deep learning models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15411-15420.
- Di Mascio, T., Caruso, F., & Peretti, S. (2023). How to make an artificial intelligence algorithm "ecological"? Insights from a holistic perspective. In CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter (Article No. 21, pp. 1-7). ACM.
- Faivre, J. (2023). The AI Act: Towards global effects? SSRN.
- Feffer, M., Martelaro, N., & Heidari, H. (2023). The AI Incident Database as an educational tool to raise awareness of AI harms: A classroom exploration of efficacy, limitations, & future improvements. EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 3, pp. 1-11).
- Greser, J. (2023). Kilka uwag o cyberbezpieczeństwie medycznej AI [A few remarks on the cybersecurity of medical AI]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie [AI cybersecurity. AI in cybersecurity] (pp. 73-81). NASK - Państwowy Instytut Badawczy. ISBN 978-83-65448-55-2.
- Groza, A., & Marginean, A. (2023). Brave new world: AI in teaching and learning. In ICERI2023 Proceedings (pp. 8706-8713). Technical University of Cluj-Napoca.
- Groza, A., & Marginean, A. (2023). Brave new world: Artificial intelligence in teaching and learning. arXiv.
- Hadshar, R. (2023). A review of the evidence for existential risk from AI via misaligned power-seeking. arXiv.
- Hong, Y., Lian, J., Xu, L., Min, J., Wang, Y., & Freeman, L. J. (2023). Statistical perspectives on reliability of artificial intelligence systems. Quality Engineering, 35(1), 56-78.
- Huang, R., Holzapfel, A., Sturm, B., & Kaila, A.-K. (2023). Beyond diverse datasets: Responsible MIR, interdisciplinarity, and the fractured worlds of music. Transactions of the International Society for Music Information Retrieval, 6(1), 43-59.
- Inoue, S., Nguyen, M.-T., Mizokuchi, H., Nguyen, T.-A. D., Nguyen, H.-H., & Le, D. T. (2023). Towards safer operations: An expert-involved dataset of high-pressure gas incidents for preventing future failures. arXiv.
- Kanade, A., Bhoite, S., Kanade, S., & Jain, N. (2023). Artificial Intelligence and Morality: A Social Responsibility. Journal of Intelligence Studies in Business, 13(1).
- Kilhoffer, Z., Nikolich, A., Sanfilippo, M. R., & Zhou, Z. (2023). AI accountability policy. School of Information Sciences, University of Illinois at Urbana-Champaign.
- Larsonneur, C. (2023). L'algorithme sert-il les traducteurs ? Conditions et contexte de travail avec les outils de traduction neuronale [Does the algorithm serve translators? Working conditions and context with neural translation tools]. In O. Guillon & S. Pickford (Eds.), Approches socio-économiques de la traduction littéraire [Socio-economic approaches to literary translation] (Vol. 35, Issue 2, pp. 90-103). Parallèles.
- Lupo, G. (2023). Risky artificial intelligence: The role of incidents in the path to AI regulation. Law, Technology and Humans, 5(1), 133-152. Faculty of Law, Queensland University of Technology.
- Marres, N., & Sormani, P. (2023). Testing 'AI': Do we have a situation? A conversation. Universität Siegen.
- McConvey, K., Guha, S., & Kuzminykh, A. (2023). A human-centered review of algorithms in decision-making in higher education. In CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Article No. 223, pp. 1-15). ACM.
- McGregor, S. (2023). A scaled multiyear responsible artificial intelligence impact assessment. Computer, 56(8), 20-27.
- McGregor, S., & Hostetler, J. (2023). Data-centric governance. arXiv.
- Morgan, P. (2023). Tort liability and autonomous systems accidents: Challenges and future developments. In P. Morgan (Ed.), Tort liability and autonomous systems accidents (pp. 1-26). Edward Elgar Publishing.
- Pan, C., Gao, Y., & Gu, A. (2023). Modeling operational profile for AI systems: A case study on UAV systems. In 2023 4th International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI) (pp. 1-8). IEEE.
- Pletcher, S. N. (2023, September 1). Starting Slowly to Go Fast: Deep Dive in the Context of AI Pilot Projects.
- Pletcher, S. (2023). Visual privacy: Current and emerging regulations around unconsented video analytics in retail. arXiv.
- Rodrigues, R., Resseguier, A., & Santiago, N. (2023). When artificial intelligence fails: The emerging role of incident databases. Public Governance, Administration and Finances Law Review, 8(2), 17-28.
- Rousi, R., Samani, H., Mäkitalo, N., Vakkuri, V., Linkola, S., Kemell, K.-K., Daubaris, P., Fronza, I., Mikkonen, T., & Abrahamsson, P. (2024). Business and ethical concerns in domestic conversational generative AI-empowered multi-robot systems. In S. Hyrynsalmi, J. Münch, K. Smolander, & J. Melegati (Eds.), Software Business: 14th International Conference, ICSOB 2023, Lahti, Finland, November 27-29, 2023, Proceedings (pp. 173-189). Springer.
- Schloetzer, J. D., & Yoshinaga, K. (2023). Algorithmic hiring systems: Implications and recommendations for organisations and policymakers. In YSEC Yearbook of Socio-Economic Constitutions 2023: Law and the governance of artificial intelligence (pp. 213-246). Springer.
- Shaffer Shane, T. (2023). AI incidents and 'networked trouble': The case for a research agenda. Big Data & Society, 10(2).
- Shoker, S., Reddie, A., Barrington, S., Booth, R., Brundage, M., Chahal, H., Depp, M., Drexel, B., Gupta, R., Favaro, M., Hecla, J., Hickey, A., Konaev, M., Kumar, K., Lambert, N., Lohn, A., O'Keefe, C., Rajani, N., Sellitto, M., Trager, R., Walker, L., Wehsener, A., & Young, J. (2023). Confidence-building measures for artificial intelligence: Workshop proceedings. arXiv.
- Silicki, K. (2023). Cyberbezpieczeństwo systemów wykorzystujących sztuczną inteligencję w świetle raportów ENISA [Cybersecurity of systems using artificial intelligence in light of ENISA reports]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie [AI cybersecurity. AI in cybersecurity] (pp. 10-21). NASK - Państwowy Instytut Badawczy.
- Sood, S., & Kim, A. (2023). The golden age of the big data audit: Agile practices and innovations for e-commerce, post-quantum cryptography, psychosocial hazards, artificial intelligence algorithm audits, and deepfakes. International Journal of Innovation and Economic Development, 9(2), 7-23.
- Stoica, A.-A., & Pica, Ș. (2023). Drones and the ethical politics of public monitoring. Challenges of the Knowledge Society. Public Law, 337-345.
- Turri, V., & Dzombak, R. (2023). Why we need to know more: Exploring the state of AI incident documentation practices. AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 576-583.
- Velichkovska, B., Denkovski, D., Gjoreski, H., Kalendar, M., & Osmani, V. (2023). A Survey of Bias in Healthcare: Pitfalls of Using Biased Datasets and Applications. In Artificial Intelligence Application in Networks and Systems (CSOC 2023) (pp. 570-584). Lecture Notes in Networks and Systems, volume 724. Springer.
- Watson, E., Viana, T., & Zhang, S. (2023). Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI, 4(1), 128-171.
- Winter, C., Hollman, N., & Manheim, D. (2023). Value Alignment for Advanced Artificial Judicial Intelligence. American Philosophical Quarterly, 60(2), 187-203.
- Wright, L. S. (2023). Artificial intelligence: Why we need it and why we need to be cautious. In M. Lovell, O. S. Moghraby, & R. Waller (Eds.), Digital Mental Health: From Theory to Practice (pp. 60-71). Cambridge University Press.
- Wu, W., & Liu, S. (2023). A comprehensive review and systematic analysis of artificial intelligence regulation policies. arXiv.
- Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. In 2023 IEEE/ACM 2nd International Conference on AI Engineering - Software Engineering for AI (CAIN) (pp. 27-34). IEEE.
- Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. arXiv.
- Xu, W. (2023). User-centered design (IX): A "user experience 3.0" paradigm framework in the intelligence era. arXiv.
- Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. Interactions, 30(1), 42-47.
- Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. arXiv.
- Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human-Computer Interaction, 39(3), 494-518.
- Zhan, X., Sun, H., & Miranda, S. M. (2023). How does AI fail us? A typological theorization of AI failures. In ICIS 2023 Proceedings: AI in Business and Society.
- Zhou, L., Moreno-Casares, P. A., Martínez-Plumed, F., Burden, J., Burnell, R., Cheke, L., Ferri, C., Marcoci, A., Mehrbakhsh, B., Moros-Daval, Y., Ó hÉigeartaigh, S., Rutar, D., Schellaert, W., Voudouris, K., & Hernández-Orallo, J. (2023). Predictable artificial intelligence. arXiv.
- Zhu, Y. (Zhu Yu 朱禹), Chen, G. (Chen Guanze 陈关泽), Lu, Y. (Lu Yongrong 陆泳溶), & Fan, W. (Fan Wei 樊伟). (2023). Generative Artificial Intelligence Governance Action Framework: Content Analysis Based on AIGC Incident Report Texts. 图书情报知识 (Library and Information Knowledge), 40(4), 41-51.
- Žunić, L., Đukanović, G., & Popović, G. (2023). Rizici vještačke inteligencije: Analiza i implikacije [Risks of artificial intelligence: Analysis and implications]. In 15th International Conference "Information Technology and Application" (ITeO 2023) (Vol. 15, pp. 29-40). Banja Luka, Bosnia and Herzegovina.
Citations in briefs, theses, white papers, and mixed genres
- Acion, L., Rajngewerc, M., Randall, G., & Etcheverry, L. (2023). Generative AI poses ethical challenges for open science. Nature Human Behaviour, 7, 1800-1801.
- Antunović, J. (2023). Sigurnost komunikacije u kritičnoj infrastrukturi [Communication security in critical infrastructure] [Undergraduate thesis, Sveučilište u Zagrebu, Fakultet prometnih znanosti]. Repozitorij Fakulteta prometnih znanosti.
- Agnew, W. (2023). AI ethics and critique for robotics (Publication No. 30636400) [Doctoral dissertation, University of Washington]. ProQuest Dissertations & Theses Global.
- Attard-Frost, B., & Widder, D. G. (2023). The Ethics of AI Value Chains. arXiv.
- Bogusz, I. C., & Johnson, D. (2023). AI for the benefit of society: Progress with trust and transparency. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media. Stockholm: Fores.
- D'Albergo, E., Fasciani, T., & Giovanelli, G. (2023, January 19-21). Social Powers and Governance of Artificial Intelligence in Urban Security Policies: Video Surveillance in Turin. Paper presented at Re-assembling the social. Re(i)stituting the social. 40 years of AIS, Naples, Italy.
- Desouza, K. C., & Dawson, G. S. (2023). Pathways to trusted progress with artificial intelligence. IBM Center for The Business of Government.
- Duarte, A. B. F. (2023). Enhancing Portuguese public services: Prototype of a mobile application with a digital assistant [Master's project, Escola Superior de Comunicação Social]. Instituto Politécnico de Lisboa, Escola Superior de Comunicação Social.
- Giannini, A. (2023). Criminal behavior and accountability of artificial intelligence systems (Doctoral dissertation, University of Florence and Maastricht University).
- Hoffmann, M., & Frase, H. (2023). Adding structure to AI harm: An introduction to CSET's AI harm framework. Center for Security and Emerging Technology.
- Isbell, C., Littman, M. L., & Norvig, P. (2023). Viewpoint: Software Engineering of Machine Learning Systems. Communications of the ACM, 66(2), 35-37.
- Knight, S., Heggart, K., Dickson-Deane, C., Ford, H., Hunter, J., Johns, A., Kitto, K., Cetindamar Kozanoglu, D., Maher, D., & Narayan, B. (2023). Submission in response to the House Standing Committee on Employment, Education and Training's inquiry into the use of generative artificial intelligence in the Australian education system.
- Kutz, J., Göbels, V. P., Brajovic, D., Fresz, B., Renner, N., Omri, S., Neuhüttler, J., Huber, M., & Bienzeisler, B. (2023). KI-Zertifizierung und Absicherung im Kontext des EU AI Act: Herausforderungen und Bedürfnisse aus Sicht von Unternehmen [AI certification and assurance in the context of the EU AI Act: Challenges and needs from the perspective of companies]. Fraunhofer IAO.
- Longstaff, T. (2023). SEI thoughts on AI T&E and related topics (Technical report). Carnegie Mellon University, Pittsburgh, PA; Air Force Life Cycle Management Center, Hanscom AFB, MA. Accession Number AD1199686.
- Massei, G. (2023). Algorithmic Trading: An Overview and Evaluation of Its Impact on Financial Markets [Master's thesis, Università Ca' Foscari Venezia].
- Musser, M., Lohn, A., Dempsey, J. X., Spring, J., Kumar, R. S. S., Leong, B., Liaghati, C., Martinez, C., Grant, C. D., Rohrer, D., Frase, H., Bansemer, J., Rodriguez, M., Regan, M., Chowdhury, R., & Hermanek, S. (2023). Adversarial machine learning and cybersecurity: Risks, challenges, and legal implications. Center for Security and Emerging Technology.
- Narayanan, M., Seymour, A., Frase, H., & Elmgren, K. (2023). Repurposing the wheel: Lessons for AI standards (Workshop Report). Center for Security and Emerging Technology.
- NIST. (2023). Risk Management Playbook.
- Sharma, A. (2023). Testing of machine learning algorithms and models (PhD dissertation). Universität Oldenburg.
- Shneiderman, B. (2023). ACM TechBrief: Safer algorithmic systems (Issue 6). Association for Computing Machinery.
- Sivakumaran, A. (2023). Investigating consumer perception and speculative AI labels for creative AI usage in media (Master's thesis, KTH, School of Electrical Engineering and Computer Science).
- Toner, H., Ji, J., Bansemer, J., Lim, L., Painter, C., Corley, C., Whittlestone, J., Botvinick, M., Rodriguez, M., & Kumar, R. S. S. (2023). Skating to where the puck is going: Anticipating and managing risks from frontier AI systems. Center for Security and Emerging Technology.
- Wang, L. (2023). An urgency for inclusivity: Redesigning datasets for improved representation of LGBTQ+ identity terms in artificial intelligence (A.I.). HSS4 - The Modern Context: Queer Theory and Politics, Professor Barnick.
- Zhang, J. (2023). Evaluating Artificial Neural Network Robustness for Safety-Critical Systems [Ph.D. dissertation, Technical University of Denmark]. Kgs. Lyngby: Technical University of Denmark.
2022
- Macrae, Carl. "Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk." Risk analysis 42.9 (2022): 1999-2025.
- Felländer, Anna, et al. "Achieving a Data-driven Risk Assessment Methodology for Ethical AI." Digital Society 1.2 (2022): 13.
- Apruzzese, Giovanni, et al. "'Real Attackers Don't Compute Gradients': Bridging the Gap Between Adversarial ML Research and Practice." arXiv preprint arXiv:2212.14315 (2022).
- Petersen, Eike, et al. "Responsible and regulatory conform machine learning for medicine: A survey of challenges and solutions." IEEE Access 10 (2022): 58375-58418.
- Schuett, Jonas. "Three lines of defense against risks from AI." arXiv preprint arXiv:2212.08364 (2022).
- Schiff, Daniel S. "Looking through a policy window with tinted glasses: Setting the agenda for US AI policy." Review of Policy Research.
- Neretin, Oleksii, and Vyacheslav Kharchenko. "Model for Describing Processes of AI Systems Vulnerabilities Collection and Analysis using Big Data Tools." 2022 12th International Conference on Dependable Systems, Services and Technologies (DESSERT). IEEE, 2022.
- Durso, Francis, et al. "Analyzing Failures in Artificial Intelligent Learning Systems (FAILS)." 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022.
- Kassab, Mohamad, Joanna DeFranco, and Phillip Laplante. "Investigating Bugs in AI-Infused Systems: Analysis and Proposed Taxonomy." 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2022.
- Braga, Juliao, et al. "Projeto para o desenvolvimento de um artigo sobre governança de algoritmos e dados." (2022).
- Secchi, Carlo, and Alessandro Gili. "Digitalisation for sustainable infrastructure: the road ahead." Digitalisation for sustainable infrastructure (2022): 1-326.
- Groza, Adrian, et al. "Elaborarea cadrului strategic național în domeniul inteligenței artificiale" [Developing the national strategic framework in the field of artificial intelligence].
- Braga, Juliao, et al. "Project for the Development of a Paper on Algorithm and Data Governance." (2022). (Original Portuguese).
- NIST. Risk Management Playbook. 2022
- Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
- Schwartz, Reva, et al. "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." (2022).
- McGrath, Quintin, et al. "An Enterprise Risk Management Framework to Design Pro-Ethical AI Solutions." University of South Florida, 2022.
- Nor, Ahmad Kamal Mohd, et al. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study of Real-World Gas Turbine Anomalies." (2022).
- Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." arXiv preprint arXiv:2203.00938 (2022).
- Hundt, Andrew, et al. "Robots Enact Malignant Stereotypes." 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
- Tidjon, Lionel Nganyewou, and Foutse Khomh. "Threat Assessment in Machine Learning based Systems." arXiv preprint arXiv:2207.00091 (2022).
- Naja, Iman, et al. "Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information." IEEE Access 10 (2022): 74383-74411.
- Cinà, Antonio Emanuele, et al. "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning." arXiv preprint arXiv:2205.01992 (2022).
- Schröder, Tim, and Michael Schulz. "Monitoring machine learning models: A categorization of challenges and methods." Data Science and Management (2022).
- Corea, Francesco, et al. "A principle-based approach to AI: the case for European Union and Italy." AI & SOCIETY (2022): 1-15.
- Carmichael, Zachariah, and Walter J. Scheirer. "Unfooling Perturbation-Based Post Hoc Explainers." arXiv preprint arXiv:2205.14772 (2022).
- Wei, Mengyi, and Zhixuan Zhou. "AI Ethics Issues in Real World: Evidence from AI Incident Database." arXiv preprint arXiv:2206.07635 (2022).
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions." IEEE Access (2022).
- Karunagaran, Surya, Ana Lucic, and Christine Custis. "XAI Toolsheet: Towards A Documentation Framework for XAI Tools."
- Paudel, Shreyasha, and Aatiz Ghimire. "AI Ethics Survey in Nepal."
- Ferguson, Ryan. "Transform Your Risk Processes Using Neural Networks."
- Fujitsu Corporation. "AI Ethics Impact Assessment Casebook." 2022
- Shneiderman, Ben, and Mengnan Du. "Human-Centered AI: Tools." 2022
- Salih, Salih. "Understanding Machine Learning Interpretability." Medium. 2022
- Garner, Carrie. "Creating Transformative and Trustworthy AI Systems Requires a Community Effort." Software Engineering Institute. 2022
- Weissinger, Laurin. "AI, Complexity, and Regulation." The Oxford Handbook of AI Governance (February 14, 2022).
2021
- Arnold, Zachary, and Helen Toner. "AI Accidents: An Emerging Threat." Center for Security and Emerging Technology (CSET) Policy Brief (2021).
- Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions." Philosophies 6.1 (2021): 6.
- Falco, Gregory, and Leilani H. Gilpin. "A stress testing framework for autonomous system verification and validation (v&v)." 2021 IEEE International Conference on Autonomous Systems (ICAS). IEEE, 2021.
- Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions." arXiv preprint arXiv:2107.09546 (2021).
- John-Mathews, Jean-Marie. AI ethics in practice, challenges and limitations. Diss. Université Paris-Saclay, 2021.
- Macrae, Carl. "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk." Safety and Sociotechnical Sources of Risk (June 4, 2021) (2021).
- Hong, Matthew K., et al. "Planning for Natural Language Failures with the AI Playbook." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
- Ruohonen, Jukka. "A Review of Product Safety Regulations in the European Union." arXiv preprint arXiv:2102.03679 (2021).
- Kalin, Josh, David Noever, and Matthew Ciolino. "A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models." arXiv preprint arXiv:2103.02718 (2021).
- Aliman, Nadisha Marie, and Leon Kester. "Epistemic defenses against scientific and empirical adversarial AI attacks." CEUR Workshop Proceedings. Vol. 2916. CEUR WS, 2021.
- John-Mathews, Jean-Marie. L’Éthique de l’Intelligence Artificielle en Pratique. Enjeux et Limites. Diss. université Paris-Saclay, 2021.
- Smith, Catherine. "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape." IFLA Journal (2021): 03400352211057145
If you have a scholarly work that should be added here, please contact us.