Related Work

While formal AI incident research is relatively new, a number of people and organizations have been collecting what could be considered incidents. These include:

  • Awesome Machine Learning Interpretability: AI Incident Tracker
  • AI and Algorithmic Incidents and Controversies, by Charlie Pownall
  • Map of Helpful and Harmful AI

If you have an incident resource that could be added here, please contact us.

The following publications have been indexed by Google Scholar as referencing the database itself, rather than solely individual incidents. Please contact us if your reference is missing.

Responsible AI Collaborative Research

Where needed to serve the broader safety and fairness communities, the Collab produces and sponsors research. Works to date include the following.

  • The original research publication released at the public announcement of the AI Incident Database. All citations of this work will be added to this page.
    McGregor, Sean. "Preventing repeated real world AI failures by cataloging incidents: The AI incident database." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021.
  • A major update to the incident definitions and criteria as presented at the 2022 NeurIPS Workshop on Human-Centered AI.
    McGregor, Sean, Kevin Paeth, and Khoa Lam. "Indexing AI Risks with Incidents, Issues, and Variants." arXiv preprint arXiv:2211.10384 (2022).
  • Our approach to reducing the uncertainty of incident causes when analyzing open source incident reports. Presented at SafeAI.
    Pittaras, Nikiforos, and Sean McGregor. "A taxonomic system for failure cause analysis of open source AI incidents." arXiv preprint arXiv:2211.07280 (2022).
  • Important lessons learned from editing AI incidents, focusing on issues related to their temporal ambiguity, multiplicity, large-scale exposure harms, and inherent uncertainty in reporting. Submitted to the 2025 Conference on Innovative Applications of Artificial Intelligence (IAAI-25).
    Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. "Lessons for Editors of AI Incidents from the AI Incident Database." arXiv preprint arXiv:2409.16425 (2024).

2024

  • Abercrombie, Gavin, Djalel Benbouzid, Paolo Giudici, Delaram Golpayegani, Julio Hernandez, Pierre Noro, Harshvardhan Pandit, Eva Paraschou, Charlie Pownall, Jyoti Prajapati, Mark A. Sayre, Ushnish Sengupta, Arthit Suriyawongkul, Ruby Thelot, Sofia Vei, and Laura Waltersdorfer. "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms." arXiv, last revised November 9, 2024.
  • Abu Zaid, Faried, Daniel Neider, and Mustafa Yalçıner. "VeriFlow: Modeling Distributions for Neural Network Verification." arXiv, June 20, 2024.
  • Agarwal, Avinash, and Manisha J. Nene. "Addressing AI Risks in Critical Infrastructure: Formalising the AI Incident Reporting Process." Paper presented at the 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, July 12--14, 2024. Published September 20, 2024.
  • Agarwal, Avinash, and Manisha J. Nene. "Standardised Schema and Taxonomy for AI Incident Databases in Critical Digital Infrastructure." Paper presented at the 2024 IEEE Pune Section International Conference (PuneCon), Pune, India, December 13--15, 2024. Published February 27, 2025.
  • Allaham, Mowafak, and Nicholas Diakopoulos. "Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment." arXiv, May 20, 2024.
  • All Party Parliamentary Group for Fair Elections. Free But Not Fair: British Elections and How to Restore Trust in Politics. November 25, 2024.
  • Anandayuvaraj, Dharun, Matthew Campbell, Arav Tewari, and James C. Davis. "FAIL: Analyzing Software Failures from the News Using LLMs." In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE '24), 506--18. New York: Association for Computing Machinery, 2024.
  • Bach, Tita A., Jenny K. Kristiansen, Aleksandar Babic, and Alon Jacovi. "Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review." IEEE Access 12 (August 1, 2024): 106385--106414.
  • Baeza-Yates, Ricardo, and Usama M. Fayyad. "Responsible AI: An Urgent Mandate." IEEE Intelligent Systems 39, no. 1 (January-February 2024): 12--17.
  • Batool, Amna, Didar Zowghi, and Muneera Bano. "AI Governance: A Systematic Literature Review." Research Square, July 24, 2024.
  • Bender, Emily M., and Alvin Grissom II. "Power Shift: Toward Inclusive Natural Language Processing." In Inclusion in Linguistics, edited by Anne H. Charity Hudley, Christine Mallinson, and Mary Bucholtz, 199--221. Oxford: Oxford University Press, 2024.
  • Bérastégui, Pierre. Artificial Intelligence in Industry 4.0: Implications for Occupational Safety and Health. Report 2024.01. Brussels: European Trade Union Institute (ETUI), January 2024.
  • Biecek, Przemyslaw, and Wojciech Samek. "Position: Explain to Question Not to Justify." arXiv, June 28, 2024.
  • Bieringer, Lukas, Kevin Paeth, Jochen Stängler, Andreas Wespi, Alexandre Alahi, and Kathrin Grosse. Position: A Taxonomy for Reporting and Describing AI Security Incidents. First submitted December 19, 2024. Revised February 26, 2025. arXiv preprint arXiv:2412.14855 [cs.CR].
  • Bikkasani, Dileesh Chandra. "Navigating Artificial General Intelligence (AGI): Societal Implications, Ethical Considerations, and Governance Strategies." AI and Ethics, published December 17, 2024.
  • Birkstedt, Teemu. Governing Artificial Intelligence: From Ethical Principles Toward Organizational AI Governance Practices. Doctoral diss., University of Turku, Turku School of Economics, 2024. Annales Universitatis Turkuensis, Ser. E, Oeconomica, Tom. 124.
  • Blösser, Myrthe, and Andrea Weihrauch. "A Consumer Perspective of AI Certification: The Current Certification Landscape, Consumer Approval, and Directions for Future Research." European Journal of Marketing 58, no. 2 (February 8, 2024).
  • Bogucka, Edyta, Marios Constantinides, Julia De Miguel Velazquez, Sanja Šćepanović, Daniele Quercia, and Andrés Gvirtz. "The Atlas of AI Incidents in Mobile Computing: Visualizing the Risks and Benefits of AI Gone Mobile." arXiv, July 22, 2024.
  • Bogucka, Edyta, Marios Constantinides, Sanja Šćepanović, and Daniele Quercia. "AI Design: A Responsible AI Framework for Impact Assessment Reports." IEEE Internet Computing (Early Access), September 2, 2024.
  • Bogucka, Edyta, Sanja Šćepanović, and Daniele Quercia. "Atlas of AI Risks: Enhancing Public Understanding of AI Risks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 12, no. 1 (October 14, 2024): 33--43.
  • Bolboli Qadikolaei, Somayeh, and Hamid Parsania. 2024. "The Concept of Human-Centricity in Sociological Studies of Artificial Intelligence." Quarterly of Social Studies and Research in Iran 13, no. 3 (September): 425--449.
  • Bommasani, Rishi, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, and Percy Liang. "The 2023 Foundation Model Transparency Index." Transactions on Machine Learning Research, February 2025.
  • Brandt, Aniek. 2024. Evaluating the Epistemic Condition of Responsibility for AI. Master's thesis, Utrecht University.
  • Bylykbashi, Anxhela, and Lana Gavranović. Mitigating Non-Consumer AI Malfunctions: Response Strategies of Retail Organizations. Master's thesis, Jönköping International Business School, Jönköping University, 2024.
  • Byrd, Don. "A+AI: Threats to Society, Remedies, and Governance." arXiv, September 3, 2024.
  • Cao, Hongpeng, Yanbing Mao, Lui Sha, and Marco Caccamo. Physics-model-guided Worst-case Sampling for Safe Reinforcement Learning. arXiv preprint arXiv:2412.13224, submitted December 17, 2024.
  • Cao, Hongpeng, Yanbing Mao, Yihao Cai, Lui Sha, and Marco Caccamo. Runtime Learning Machine. Preprint submitted to International Conference on Learning Representations (ICLR 2025), September 17, 2024. Last modified February 5, 2025.
  • Cao, Hongpeng, Yanbing Mao, Yihao Cai, Lui Sha, and Marco Caccamo. "Simplex-Enabled Safe Continual Learning Machine." arXiv (preprint), last revised October 6, 2024.
  • Cattell, Sven, Avijit Ghosh, and Lucie-Aimée Kaffee. "Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities." arXiv, July 26, 2024.
  • Chakraborti, Mahasweta, Bert Joseph Prestoza, Nicholas Vincent, and Seth Frey. Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure. arXiv, September 27, 2024.
  • Chen, Chuan, Yu Feng, Mengyi Wei, Peng Luo, Shengkai Wang, and Liqiu Meng. "A Hyper-Knowledge Graph System for Research on AI Ethics Cases." Heliyon 10, no. 7 (April 15, 2024): e29048.
  • Chen, Hao, Bhiksha Raj, Xing Xie, and Jindong Wang. "On Catastrophic Inheritance of Large Foundation Models." arXiv, February 2, 2024.
  • Cheong, Ben Chester. "Transparency and Accountability in AI Systems: Safeguarding Well-Being in the Age of Algorithmic Decision-Making." Frontiers in Human Dynamics 6 (July 2, 2024).
  • Chmielinski, Kasia, Sarah Newman, Chris N. Kranzinger, Michael Hind, Jennifer Wortman Vaughan, Margaret Mitchell, Julia Stoyanovich, Angelina McMillan-Major, Emily McReynolds, Kathleen Esfahany, Mary L. Gray, Audrey Chang, and Maui Hudson. The CLeAR Documentation Framework for AI Transparency: Recommendations for Practitioners & Context for Policymakers. Harvard Kennedy School Shorenstein Center on Media, Politics and Public Policy, May 21, 2024.
  • Cho, Deun-Sol, Jae-Min Cho, and Won-Tae Kim. "A Generative Digital Twin for Continually Enhancing the Intended Functional Safety of Cyber--Physical Systems." IEEE Transactions on Reliability (Early Access), October 8, 2024.
  • Corrêa, Nicholas Kluge. "Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment." arXiv (preprint), June 18, 2024.
  • Cox, Andrew. "11 Ethics Case Studies of Artificial Intelligence for Library and Information Professionals." In New Horizons in Artificial Intelligence in Libraries, edited by Edmund Balnaves, Leda Bultrini, Andrew Cox, and Raymond Uzwyshyn, 156--168. IFLA Publications 185. Berlin: De Gruyter Saur, 2025.
  • Daniels, Owen J., and Dewey Murdick. Enabling Principles for AI Governance. Washington, DC: Center for Security and Emerging Technology, July 2024.
  • Daugherty, Paul, Jeremy Jurgens, John Granger, and Cathy Li. AI Governance Alliance: Briefing Paper Series. World Economic Forum, January 18, 2024.
  • David, Tom, and Nicolas Miailhe. "Assessing the Safety and Robustness of Advanced AI." Politique étrangère 243, no. 3 (2024): 51--65.
  • De Miguel Velázquez, Julia, Sanja Šćepanović, Andrés Gvirtz, and Daniele Quercia. "Decoding Real-World Artificial Intelligence Incidents." Computer 57, no. 11 (November 2024): 71--81.
  • DeVrio, Alicia, Motahhare Eslami, and Kenneth Holstein. "Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 1093--1106. June 5, 2024.
  • Dixon, Ren Bin Lee, and Heather Frase. An Argument for Hybrid AI Incident Reporting: Lessons Learned from Other Incident Reporting Systems. Issue Brief. Center for Security and Emerging Technology, March 2024.
  • Drage, Eleanor, Kerry McInerney, and Rosi Braidotti, eds. 2024. The Good Robot: Why Technology Needs Feminism. London: Bloomsbury Academic.
  • Duarte, Daniel Edler. "Tecnopolíticas da Falha: Dispositivos de Crítica e Resistência a Novas Ferramentas Punitivas [Technopolitics of Failure: Modes of Critique and Resistance to New Punitive Tools]." Revista Brasileira de Ciências Sociais 39 (2024).
  • Dutu, Andrei. "Uniunea Europeană: Instituirea unui Regim Comun de Răspundere Extracontractuală (Delictuală) în Materie de Prejudiciu Causat de Inteligenţa Artificială [Propunere de directivă a Parlamentului European și a Consiliului privind adaptarea normelor în materie de răspundere civilă extracontractuală la inteligența artificială (Directiva privind răspunderea în materie de IA)]." Pandectele Române, no. 4 (April 2024): 211--218.
  • Expósito Jiménez, Víctor J., Georg Macher, Daniel Watzenig, and Eugen Brenner. "Safety of the Intended Functionality Validation for Automated Driving Systems by Using Perception Performance Insufficiencies Injection." Vehicles 6, no. 3 (July 4, 2024): 1164--1184.
  • Faulhaber, Ella, and Charles Chaffin. "Artificial Intelligence in Accounting, Medicine, and Law with Potential Implications for Financial Planning: A Review of Literature." Financial Services Review 32, no. 4 (2024): 1--11.
  • Gagnon, Paul, Misha Benjamin, Justine Gauthier, Catherine Regis, Jenny Lee, and Alexei Nordell-Markovits. "On the Modification and Revocation of Open Source Licences." arXiv, May 29, 2024.
  • Goldkind, Lauri, Joy Ming, and Alex Fink. "AI in the Nonprofit Human Services: Distinguishing Between Hype, Harm, and Hope." Human Service Organizations: Management, Leadership & Governance, published online December 3, 2024.
  • Golpayegani, Delaram. Semantic Frameworks to Support the EU AI Act's Risk Management and Documentation. PhD diss., Trinity College Dublin, University of Dublin, 2024.
  • González Mendoza, Juan Pablo, Felipe Trujillo-Romero, and Juan José Cárdenas Cornejo. Detección de objetos 3D con PointNet para la conducción autónoma [3D Object Detection with PointNet for Autonomous Driving]. Congreso Estudiantil de Inteligencia Artificial Aplicada a la Ingeniería y Tecnología, UNAM, FESC, Estado de México, 2024.
  • Gray, Douglas, and Evan Shellshear. Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, Without the Hype. Boca Raton, FL: CRC Press, 2024.
  • Greenberg, Ariel M. "A Schema for Harms-Sensitive Reasoning, and an Approach to Populate Its Ontology by Human Annotation." In Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams, edited by Prithviraj Dasgupta, James Llinas, Tony Gillespie, Scott Fouse, William Lawless, Ranjeev Mittu, and Donald Sofge, 265--278. London: Academic Press, 2024.
  • Gross, Nicole. "A Powerful Potion for a Potent Problem: Transformative Justice for Generative AI in Healthcare." AI and Ethics (July 31, 2024).
  • Grosse, Kathrin, Lukas Bieringer, Tarek R. Besold, Battista Biggio, and Alexandre Alahi. "When Your AI Becomes a Target: AI Security Incidents and Best Practices." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 21 (March 24, 2024): 23041--23046.
  • Henman, Paul W. Fay. 2024. "Just AI: Using Socio-Legal Studies of Fairness to Inform Ethical AI in Government." In Socio-Legal Generation: Essays in Honour of Michael Adler, edited by Sharon Cowan and Simon Halliday, 37--54. Palgrave Socio-Legal Studies. Cham: Palgrave Macmillan.
  • Hollanek, Tomasz, and Indira Ganesh. "Easy Wins and Low Hanging Fruit: Blueprints, Toolkits, and Playbooks to Advance Diversity and Inclusion in AI." In In/Convenience: Inhabiting the Logistical Surround, edited by Joshua Neves and Marc Steinberg, 162--175. Amsterdam: Institute of Network Cultures, 2024.
  • Hossain, Mahmood, Hamad Khalid, Avent Prakasa Rao, Mohammad Lootah, Salah Salim Khalaf Al-Mohammedi, and Salih Rashid Majeed. "Comprehensive Review of AI, IoT, and ML in Enhancing Urban Mobility and Reducing Carbon Footprints." Paper presented at the 2024 Third International Conference on Sustainable Mobility Applications, Renewables and Technology (SMART), Dubai, United Arab Emirates, November 22--24, 2024. IEEE.
  • Householder, Allen, Vijay Sarvepalli, Jeff Havrilla, Matthew Churilla, Lena Pons, Shing-hon Lau, Nathan VanHoudnos, Andrew Kompanek, and Lauren McIlvenny. Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems. Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute, August 2024.
  • Howell, Bronwyn E. "WEIRD? Institutions and Consumers' Perceptions of Artificial Intelligence in 31 Countries." July 23, 2024. SSRN.
  • Hundt, Andrew, Julia Schuller, and Severin Kacianka. "Towards Equitable Agile Research and Development of AI and Robotics." arXiv, February 13, 2024.
  • Hussain, Muhammad, Ioanna Iacovides, Tom Lawton, Vishal Sharma, Zoe Porter, Alice Cunningham, Ibrahim Habli, Shireen Hickey, Yan Jia, Phillip Morgan, and Nee Ling Wong. "Development and Translation of Human-AI Interaction Models into Working Prototypes for Clinical Decision-Making." In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS '24), 1607--1619. July 1, 2024.
  • Hutiri, Wiebke, Orestis Papakyriakopoulos, and Alice Xiang. "Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 359--376. June 5, 2024.
  • Kiviharju, Mikko. "On the Cybersecurity of Logistics in the Age of Artificial Intelligence." In Artificial Intelligence for Security: Enhancing Protection in a Changing World, edited by Tuomo Sipola, Janne Alatalo, Monika Wolfmayr, and Tero Kokkonen, 189--219. Cham: Springer, 2024.
  • Klingbeil, Artur, Cassandra Grützner, and Philipp Schreck. "Trust and Reliance on AI---An Experimental Study on the Extent and Costs of Overreliance on AI." Computers in Human Behavior 160 (November 2024): 108352.
  • Kloza, Dariusz, Thibaut D'hulst, and Malik Aouadi. "What Could Possibly Go Wrong? On Risks to the Rights and Freedoms of Natural Persons in EU Data Protection Law, Their Typologies and Their Identification." Technology and Regulation (2024): 309--329.
  • Knight, Simon, Cormac McGrath, Olga Viberg, and Teresa Cerratto Pargman. "Learning about AI Ethics from Cases: A Scoping Review of AI Incident Repositories and Cases." Research Square, August 23, 2024.
  • Knoll, Alessandra, ed. Desafios do Direito Frente às Novas Tecnologias [Challenges of Law in the Face of New Technologies]. 1st ed. Vol. 1. Guarujá-SP: Editora Científica Digital, June 28, 2024.
  • Koh, Benjamin. "Seeking the Golden Thread in the Black Box: Artificial Intelligence and Personal Injury Law." Precedent (Sydney, N.S.W.), no. 183 (July 2024): 46--50. Sydney: Australian Lawyers Alliance.
  • Kowald, Dominik, Sebastian Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, Lea Demelius, Angela Fessl, Maximilian Toller, Inti Gabriel Mendoza Estrada, Ilija Šimić, Vedran Sabol, Andreas Trügler, Eduardo Veas, Roman Kern, Tomislav Nad, and Simone Kopeinik. "Establishing and Evaluating Trustworthy AI: Overview and Research Challenges." Frontiers in Big Data 7 (November 28, 2024).
  • Kox, Esther S., and Beatrice Beretta. "Evaluating Generative AI Incidents: An Exploratory Vignette Study on the Role of Trust, Attitude, and AI Literacy." In HHAI 2024: Hybrid Human AI Systems for the Social Good, 188--198. Vol. 386 of Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press, 2024.
  • Kuilman, Sietze Kai, Luciano Cavalcante Siebert, Stefan Buijsman, and Catholijn M. Jonker. "How to Gain Control and Influence Algorithms: Contesting AI to Find Relevant Reasons." AI and Ethics (June 5, 2024).
  • Laczi, Szandra Anna, and Valéria Póser. "Impact of Deepfake Technology on Children: Risks and Consequences." In Proceedings of the 2024 IEEE 22nd Jubilee International Symposium on Intelligent Systems and Informatics (SISY), Pula, Croatia, September 19--21, 2024. IEEE, 2024.
  • Lanamäki, Arto, Karin Väyrynen, Heidi Hietala, Elena Parmiggiani, and Polyxeni Vasilakopoulou. 2024. "Not Inevitable: Navigating Labor Displacement and Reinstatement in the Pursuit of AI for Social Good." Communications of the Association for Information Systems 55: 831--845.
  • Lawrence, Neil D., and Jessica Montgomery. "Accelerating AI for Science: Open Data Science for Science." Royal Society Open Science 11, no. 8 (August 21, 2024).
  • Lee, Hao-Ping (Hank), Yu-Ju Yang, Thomas Serban Von Davier, Jodi Forlizzi, and Sauvik Das. "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks." In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), Article 775, 1--19. May 11, 2024.
  • Lee, Sung Une, Harsha Perera, Boming Xia, Yue Liu, Qinghua Lu, and Liming Zhu. "QB4AIRA: A Question Bank for Responsible AI Risk Assessment." IEEE Software (Early Access), December 9, 2024.
  • Lee, Sung Une, Harsha Perera, Yue Liu, Boming Xia, Qinghua Lu, and Liming Zhu. "Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment." arXiv, August 2, 2024.
  • Leibowicz, Claire R., and Christian H. Cardona. "From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases." arXiv, July 19, 2024.
  • Lu, You, Yifan Tian, Dingji Wang, Bihuan Chen, and Xin Peng. "AdvFuzz: Finding More Violations Caused by the EGO Vehicle in Simulation Testing by Adversarial NPC Vehicles." arXiv (preprint), November 29, 2024.
  • Lu, You, Yifan Tian, Yuyang Bi, Bihuan Chen, and Xin Peng. "DiaVio: LLM-Empowered Diagnosis of Safety Violations in ADS Simulation Testing." In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024), 376--388. New York: Association for Computing Machinery, 2024.
  • Maitra, Suvradip, Lyndal Sleep, Suzanna Fay, and Paul Henman. Building a Trauma-Informed Algorithmic Assessment Toolkit. ARC Centre of Excellence for Automated Decision-Making and Society, August 26, 2024.
  • Manheim, David. "Building a Culture of Safety for AI: Comparisons and Challenges." SSRN, July 10, 2024.
  • Mansyl, Vieri, and Windy Gambetta. "A Novel Approach to Explainable AI: Leveraging Ripple Down Rules Algorithm for Knowledge-Based Explanations." In Proceedings of the 2024 11th International Conference on Advanced Informatics: Concept, Theory and Application (ICAICTA), Singapore, September 28--30, 2024. IEEE, 2024.
  • Markovitch, Dmitri G., Rusty A. Stough, and Dongling Huang. "Consumer Reactions to Chatbot Versus Human Service: An Investigation in the Role of Outcome Valence and Perceived Empathy." Journal of Retailing and Consumer Services 79 (July 2024): 103847.
  • May, Richard, Jacob Krüger, and Thomas Leich. "SoK: How Artificial-Intelligence Incidents Can Jeopardize Safety and Security." In Proceedings of the 19th International Conference on Availability, Reliability and Security (ARES '24), Article 44, 1--12. July 30, 2024.
  • McGrath, Quintin. "Responding to the Sharp Rise in AI in the 2023 SIM IT Trends Survey." MIS Quarterly Executive 23, no. 1 (March 2024): Article 8.
  • McGrath, Quintin, Alan R. Hevner, and Gert-Jan de Vreede. "Managing Ethical Risks of Artificial Intelligence in Business Applications." TechRxiv, February 27, 2024.
  • McGregor, Sean. "Open Digital Safety." Computer 57, no. 4 (April 2, 2024): 99--103.
  • McGregor, Sean, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, Will Smith, Shayne Longpre, Avijit Ghosh, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri. "To Err Is AI: A Case Study Informing LLM Flaw Reporting Practices." arXiv (preprint), October 15, 2024.
  • Michałkiewicz-Kądziela, Ewa. "Deepfakes: New Challenges for Law and Democracy." In Artificial Intelligence and International Human Rights Law, edited by Michał Balcerzak and Julia Kapelańska-Pręgowska, 145--157. Cheltenham, UK: Edward Elgar Publishing, 2024.
  • Mishra, Saurabh, Anand Rao, Ramayya Krishnan, Bilal Ayyub, Amin Aria, and Enrico Zio. "Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems." arXiv (preprint), November 13, 2024.
  • Morales, Sergio, Robert Clarisó, and Jordi Cabot. "A DSL for Testing LLMs for Fairness and Bias." In Proceedings of the ACM/IEEE 2024 International Conference on Model Driven Engineering Languages and Systems (MODELS '24), Linz, Austria, September 22--27, 2024.
  • National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. NIST AI 600-1. July 2024.
  • Nedzhvetskaya, Nataliya, and JS Tan. "No Simple Fix: How AI Harms Reflect Power and Jurisdiction in the Workplace." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 422--432. June 5, 2024.
  • Neogi, Trisha. Protecting People with Disabilities: A Guide for Non-Technical Committee Members in Understanding the Regulations Needed to Design Ethical AI. Master's Research Project, OCAD University, May 1, 2024.
  • Palomba, Fabio, Andrea Di Sorbo, Davide Di Ruscio, Filomena Ferrucci, Gemma Catolino, Giammaria Giordano, Dario Di Dario, Gianmario Voria, Viviana Pentangelo, Maria Tortorella, et al. "FRINGE: Context-Aware FaiRness EngineerING in Complex Software Systems." In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM '24), 608--12. New York: Association for Computing Machinery, 2024.
  • Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. Lessons for Editors of AI Incidents from the AI Incident Database. September 24, 2024. arXiv.
  • Perez-de-Viñaspre, Olatz, Olatz Arregi, and Itziar Irigoien. "Adimen artifizialeko alborapena ulertzen (Understanding Artificial Intelligence Bias)." Ekaia: Zientzia eta Teknologia Aldizkaria, in press (2025).
  • Pérez-Ugena Coromina, María. "Sesgo de Género (en IA) [Gender Bias (in AI)]." EUNOMÍA. Revista en Cultura de la Legalidad 26 (March 14, 2024): 311--330.
  • O'Connor, Mary I. "Equity360: Gender, Race, and Ethnicity---The Power of AI to Improve or Worsen Health Disparities." Clinical Orthopaedics and Related Research 482, no. 4 (April 2024): 591--594.
  • Ortega-Bolaños, Ricardo, Joshua Bernal-Salcedo, Mariana Germán Ortiz, Julian Galeano Sarmiento, Gonzalo A. Ruz, and Reinel Tabares-Soto. "Applying the Ethics of AI: A Systematic Review of Tools for Developing and Assessing AI-Based Systems." Artificial Intelligence Review 57 (April 5, 2024): 110.
  • Rauh, Maribeth, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Ramona Comanescu, Canfer Akbulut, Tom Stepleton, Juan Mateos-Garcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, Iason Gabriel, Verena Rieser, William Isaac, and Laura Weidinger. "Gaps in the Safety Evaluation of Generative AI." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, no. 1 (2024): 1200--1217.
  • Raus, Rachele, Francesca Bisiani, Maria Margherita Mattioda, and Michela Tonti, eds. Multilinguisme européen et IA entre droit, traduction et didactique des langues / Multilinguismo europeo e IA tra diritto, traduzione e didattica delle lingue / European Multilingualism and Artificial Intelligence: The Impacts on Law, Translation and Language Teaching. Turin: Università di Torino, 2024.
  • Rémy, Nicolas, Frédéric Deschamps, and Stéphane Kreckelbergh. "Construire la confiance des ChatBots à base de LLM [Building Trust in LLM-Based Chatbots]." Paper presented at Congrès Lambda Mu 24: Les métiers du risque : clés de la réindustrialisation et de la transition écologique, Institut pour la Maîtrise des Risques (IMdR), Bourges, France, October 2024.
  • Rommetveit, Kjetil, and Ingrid Foss Ballo. D 5.1: Case Study Co-Creation Methodology Report. (How) Can You Build Morality into Artificially Intelligent Systems? SUPER MoRRI -- Scientific Understanding and Provision of an Enhanced and Robust Monitoring System for RRI, Version 2. June 14, 2023.
  • Rupe, Jason, and Chris LaPlante. "Introducing the Reliability Society Failure Database." IEEE Reliability Magazine 1, no. 1 (March 2024): 5--9.
  • Salvador, Cole. "Certified Safe: A Schematic for Approval Regulation of Frontier AI." arXiv, August 12, 2024.
  • Saran, Samir, Anulekha Nandi, and Sameer Patil. 'Moving Horizons': A Responsive and Risk-Based Regulatory Framework for A.I. Special Report. Observer Research Foundation, June 28, 2024.
  • Sengupta, Ushnish. "Black Box Algorithmic Decision-Making and Transparency Challenges in Policing Practice: Lessons from Implementation of New Technologies by the Toronto Police Service." In Policing and Intelligence in the Global Big Data Era, Volume II: New Global Perspectives on the Politics and Ethics of Knowledge, edited by Tereza Østbø Kuldova, Helene Oppen Ingebrigtsen Gundhus, and Christin Thea Wathne, 195--233. Palgrave's Critical Policing Studies. Cham: Palgrave Macmillan, 2024.
  • Sharma, Chinmayi. "AI's Hippocratic Oath." Washington University Law Review, forthcoming. Yale Law & Economics Research Paper. March 14, 2024.
  • Sharma, Kavita, and Padmavati Manchikanti. "Artificial Intelligence and Policy in Healthcare Industry." In Artificial Intelligence in Drug Development: Patenting and Regulatory Aspects, 117--144. Frontiers of Artificial Intelligence, Ethics and Multidisciplinary Applications. Singapore: Springer, 2024.
  • Shams, Rifat Ara, Didar Zowghi, and Muneera Bano. "AI for All: Identifying AI Incidents Related to Diversity and Inclusion." arXiv, July 19, 2024.
  • Shane, Tommy Shaffer. AI Incident Reporting: Addressing a Gap in the UK's Regulation of AI. The Centre for Long-Term Resilience, June 2024.
  • Shoker, Ali, Rehana Yasmin, and Paulo Esteves-Verissimo. 2024. "WIP: Savvy: A Trustworthy Autonomous Vehicles Architecture." Symposium on Vehicles Security and Privacy (VehicleSec) 2024, San Diego, CA, February 26, 2024.
  • Shoker, Ali, Rehana Yasmin, and Paulo Esteves-Verissimo. "Savvy: Trustworthy Autonomous Vehicles Architecture." arXiv, February 8, 2024.
  • Siqueira de Cerqueira, José Antonio, Mamia Agbese, Rebekah Rousi, Nannan Xi, Juho Hamari, and Pekka Abrahamsson. "Can We Trust AI Agents? An Experimental Study Towards Trustworthy LLM-Based Multi-Agent Systems for AI Ethics." arXiv preprint arXiv:2411.08881 [cs.CY], October 25, 2024.
  • Škoro, Ivana Emily. "Blockchain Art Activism: Examining Four Blockchain-Based Artworks for Social, Political, and Environmental Good." Master's thesis, Aalborg University and Media Arts Cultures Consortium, June 7, 2024.
  • Spinner, Thilo, Daniel Fürst, and Mennatallah El-Assady. "iNNspector: Visual, Interactive Deep Model Debugging." arXiv, July 25, 2024.
  • Soudi, Marwa Samih, and Merja Bauters. "AI Guidelines and Ethical Readiness Inside SMEs: A Review and Recommendations." Digital Society 3 (January 31, 2024): Article 3.
  • Stanley, Jeff, and Hannah Lettie. Emerging Risks and Mitigations for Public Chatbots: LILAC v1. MTR240382. McLean, VA: The MITRE Corporation, September 2024.
  • Torkamaan, Helma, Mohammad Tahaei, Stefan Buijsman, Ziang Xiao, Daricia Wilkinson, and Bart P. Knijnenburg. "The Role of Human-Centered AI in User Modeling, Adaptation, and Personalization---Models, Frameworks, and Paradigms." In A Human-Centered Perspective of Intelligent Personalized Environments and Systems, edited by Bruce Ferwerda, Mark Graus, Panagiotis Germanakos, and Marko Tkalčič, 43--84. Human--Computer Interaction Series. Cham: Springer, 2024.
  • Tran, Michelle, and Casey Fiesler. "'It's Not Exactly Meant to Be Realistic': Student Perspectives on the Role of Ethics in Computing Group Projects." In Proceedings of the 2024 ACM Conference on International Computing Education Research (ICER '24), 517--526. August 12, 2024.
  • Tuovinen, Lauri, and Kimmo Halunen. "What Is an AI Vulnerability, and Why Should We Care? Unpacking the Relationship Between AI Security and AI Ethics." In Proceedings of the Conference on Technology Ethics 2024 (Tethics 2024), edited by Thomas Olsson et al., 30--41. CEUR Workshop Proceedings 3901. RWTH Aachen, November 7, 2024.
  • Tyukin, Ivan Y., Tatiana Tyukina, Daniël P. van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, and Penelope Allison. "Coping with AI Errors with Provable Guarantees." Information Sciences 678 (September 2024): 120856.
  • Tyukin, Ivan Y., Tatiana Tyukina, Daniël P. van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, and Penelope Allison. "Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees." arXiv, February 13, 2024.
  • Uzwyshyn, Raymond. "Building Library Artificial Intelligence Capacity: Research Data Repositories and Scholarly Ecosystems." In New Horizons in Artificial Intelligence in Libraries, edited by Andrew Cox, Edmund Balnaves, Leda Bultrini, and Raymond Uzwyshyn, 121--140. Berlin: De Gruyter, 2024.
  • Verma, Karishma. "Digital Deception: The Impact of Deepfakes on Privacy Rights." Lex Scientia Law Review 8, no. 2 (2024): 859--896.
  • Vidgen, Bertie, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Max Bartolo, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse Khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Srijan Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Sarah Luger, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, and Joaquin Vanschoren. "Introducing v0.5 of the AI Safety Benchmark from MLCommons." arXiv, May 13, 2024.
  • Viureanu, Andrei, and Bogdan Ionescu. "AI Vulnerabilities." Paper presented at the 1st Workshop on Artificial Intelligence for Multimedia, AI Multimedia Lab, CAMPUS Research Institute, POLITEHNICA Bucharest, Bucharest, Romania, November 8, 2024.
  • Volkova, Svetlana. "The Dark Side of Deepfakes: Fraud and Cybercrime." In Deepfake Technology Applications and Societal Implications, edited by Gaurav Gupta, Kapil Pandla, Raj K. Kovid, and Sailaja Bohara, 221--242. Hershey, PA: IGI Global, 2024.
  • Wang, Runfeng. Examining Algorithmic Bias Toward Racial Minorities in New Media. Syracuse University, 2024. Renée Crown University Honors Thesis Projects.
  • Warren, Sarah Egan. "Navigating the Changes That AI Is Bringing to Higher Education." UNC System Learning and Technology Journal 2, no. 1 (August 26, 2024): Special Issue - Exploring the Transformative Impact of Artificial Intelligence in Higher Education: Challenges, Opportunities, and Ethical Considerations.
  • Wei, Mengyi, Chenjing Jiao, Chenyu Zuo, Lorenz Hurni, and Liqiu Meng. "How Generative AI Supports Understanding of an Ethically Sensitive AI-Induced Event." Abstracts of the International Cartographic Association 8 (2024): 26. Presented at the 2024 ICA Workshop on AI, Geovisualization, and Analytical Reasoning -- CartoVis24, University of Warsaw, Poland, September 7, 2024.
  • Wodi, Alex. "Artificial Intelligence (AI) Governance: An Overview." May 24, 2024. SSRN.
  • World Economic Forum. Generative AI Governance: Shaping a Collective Global Future in Collaboration with Accenture. AI Governance Alliance Briefing Paper Series, January 18, 2024.
  • Worth, Sophia, Ben Snaith, Arunav Das, Gefion Thuermer, and Elena Simperl. "AI Data Transparency: An Exploration through the Lens of AI Incidents." arXiv, September 5, 2024.
  • Xi, Ran. 2024. "A Systems Approach to Shedding Sunlight on AI Black Boxes." Hofstra Law Review 53 (3), forthcoming 2025.
  • Xu, Wei, Zaifeng Gao, and Marvin Dainoff. "An HCAI Methodological Framework (HCAI-MF): Putting It Into Action to Enable Human-Centered AI." arXiv, originally submitted November 27, 2023, last revised December 21, 2024.
  • Xu, Wei. "A New Design Philosophy: Human-Centered Artificial Intelligence." In Human-AI Interaction: Enabling Human-Centered AI. Beijing: Tsinghua University Press, forthcoming 2024.
  • Xu, Wei. "A 'User Experience 3.0 (UX 3.0)' Paradigm Framework: User Experience Design for Human-Centered AI Systems." arXiv, March 7, 2024.
  • Xu, Wei, and Zaifeng Gao. "An Intelligent Sociotechnical Systems (iSTS) Concept: Toward a Sociotechnically-Based Hierarchical Human-Centered AI Approach." arXiv, July 22, 2024.
  • Xu, Wei, and Zaifeng Gao. "Enabling Human-Centered AI: A Methodological Perspective." In 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), Toronto, ON, May 15--17, 2024. IEEE, June 19, 2024.
  • Xu, Wei, Zaifeng Gao, and Liezhong Ge. "New Research Paradigms and Agenda of Human Factors Science in the Intelligence Era." Acta Psychologica Sinica 56, no. 3 (2024): 363--382.
  • Yang, Khoo Wei. Data Relationality: Privacy in the AI Age. Kuala Lumpur: Khazanah Research Institute, October 24, 2024.
  • Yeung, Karen. "Beyond 'AI Boosterism.'" IPPR Progressive Review 31, no. 2 (2024): 114--20.
  • Ződi, Zsolt. "The Conflict of the Engineering and the Legal Mindset in the Artificial Intelligence Act." SSRN, last revised November 6, 2024.

2023

Citations in peer-reviewed journal articles, book chapters, and preprints

  • Ali, S. A., Khan, R., & Ali, S. N. (2023). The Promises and Perils of Artificial Intelligence: An Ethical and Social Analysis. In S. Chakraborty (Ed.), Investigating the Impact of AI on Ethics and Spirituality (pp. 1-24). IGI Global.
  • Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., & Roundy, K. (2023). "Real attackers don't compute gradients": Bridging the gap between adversarial ML research and practice. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 1-10). IEEE.
  • Bach, T. A., Kristiansen, J. K., Babic, A., & Jacovi, A. (2023). Unpacking human-AI interaction in safety-critical industries: A systematic literature review. arXiv.
  • Baeza-Yates, R. (2023). An introduction to responsible AI. European Review, 31(4), 406-421.
  • Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review. arXiv.
  • Bommasani, R., Klyman, K., Longpre, S., Kapoor, S., Maslej, N., Xiong, B., Zhang, D., & Liang, P. (2023). The Foundation Model Transparency Index. arXiv.
  • Bondi-Kelly, E., Hartvigsen, T., Sanneman, L. M., Sankaranarayanan, S., Harned, Z., Wickerson, G., Gichoya, J. W., Oakden-Rayner, L., Celi, L. A., Lungren, M. P., Shah, J. A., & Ghassemi, M. (2023). Taking off with AI: Lessons from aviation for healthcare. In EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 4, pp. 1-14). ACM.
  • Chatterjee, R. (2023). The scope of roboethics in business ethics. 3D... IBA Journal of Management & Leadership, 14(2), 22-27.
  • Chen, P.-Y., & Liu, S. (2024). Holistic adversarial robustness of deep learning models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15411-15420.
  • Di Mascio, T., Caruso, F., & Peretti, S. (2023). How to make an artificial intelligence algorithm "ecological"? Insights from a holistic perspective. In CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter (Article No. 21, pp. 1-7). ACM.
  • Faivre, J. (2023). The AI Act: Towards global effects? SSRN.
  • Feffer, M., Martelaro, N., & Heidari, H. (2023). The AI Incident Database as an educational tool to raise awareness of AI harms: A classroom exploration of efficacy, limitations, & future improvements. EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 3, pp. 1-11).
  • Greser, J. (2023). Kilka uwag o cyberbezpieczeństwie medycznej AI [A few remarks on the cybersecurity of medical AI]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie [AI cybersecurity. AI in cybersecurity] (pp. 73-81). NASK - Państwowy Instytut Badawczy. ISBN 978-83-65448-55-2.
  • Groza, A., & Marginean, A. (2023). Brave new world: AI in teaching and learning. In ICERI2023 Proceedings (pp. 8706-8713). Technical University of Cluj-Napoca.
  • Groza, A., & Marginean, A. (2023). Brave new world: Artificial intelligence in teaching and learning. arXiv.
  • Hadshar, R. (2023). A review of the evidence for existential risk from AI via misaligned power-seeking. arXiv.
  • Hong, Y., Lian, J., Xu, L., Min, J., Wang, Y., & Freeman, L. J. (2023). Statistical perspectives on reliability of artificial intelligence systems. Quality Engineering, 35(1), 56-78.
  • Huang, R., Holzapfel, A., Sturm, B., & Kaila, A.-K. (2023). Beyond diverse datasets: Responsible MIR, interdisciplinarity, and the fractured worlds of music. Transactions of the International Society for Music Information Retrieval, 6(1), 43-59.
  • Inoue, S., Nguyen, M.-T., Mizokuchi, H., Nguyen, T.-A. D., Nguyen, H.-H., & Le, D. T. (2023). Towards safer operations: An expert-involved dataset of high-pressure gas incidents for preventing future failures. arXiv.
  • Kanade, A., Bhoite, S., Kanade, S., & Jain, N. (2023). Artificial Intelligence and Morality: A Social Responsibility. Journal of Intelligence Studies in Business, 13(1).
  • Kilhoffer, Z., Nikolich, A., Sanfilippo, M. R., & Zhou, Z. (2023). AI accountability policy. School of Information Sciences, University of Illinois at Urbana-Champaign.
  • Larsonneur, C. (2023). L'algorithme sert-il les traducteurs ? Conditions et contexte de travail avec les outils de traduction neuronale [Does the algorithm serve translators? Working conditions and context with neural translation tools]. In O. Guillon & S. Pickford (Eds.), Approches socio-économiques de la traduction littéraire (Vol. 35, Issue 2, pp. 90-103). Parallèles.
  • Lupo, G. (2023). Risky artificial intelligence: The role of incidents in the path to AI regulation. Law, Technology and Humans, 5(1), 133-152. Faculty of Law, Queensland University of Technology.
  • Marres, N., & Sormani, P. (2023). Testing 'AI': Do we have a situation? A conversation. Universität Siegen.
  • McConvey, K., Guha, S., & Kuzminykh, A. (2023). A human-centered review of algorithms in decision-making in higher education. In CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Article No. 223, pp. 1-15). ACM.
  • McGregor, S. (2023). A scaled multiyear responsible artificial intelligence impact assessment. Computer, 56(8), 20-27.
  • McGregor, S., & Hostetler, J. (2023). Data-centric governance. arXiv.
  • Morgan, P. (2023). Tort liability and autonomous systems accidents: Challenges and future developments. In P. Morgan (Ed.), Tort liability and autonomous systems accidents (pp. 1-26). Edward Elgar Publishing.
  • Pan, C., Gao, Y., & Gu, A. (2023). Modeling operational profile for AI systems: A case study on UAV systems. In 2023 4th International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI) (pp. 1-8). IEEE.
  • Pletcher, S. N. (2023, September 1). Starting Slowly to Go Fast Deep Dive in the Context of AI Pilot Projects.
  • Pletcher, S. (2023). Visual privacy: Current and emerging regulations around unconsented video analytics in retail. arXiv.
  • Rodrigues, R., Resseguier, A., & Santiago, N. (2023). When artificial intelligence fails: The emerging role of incident databases. Public Governance, Administration and Finances Law Review, 8(2), 17-28.
  • Rousi, R., Samani, H., Mäkitalo, N., Vakkuri, V., Linkola, S., Kemell, K.-K., Daubaris, P., Fronza, I., Mikkonen, T., & Abrahamsson, P. (2024). Business and ethical concerns in domestic conversational generative AI-empowered multi-robot systems. In S. Hyrynsalmi, J. Münch, K. Smolander, & J. Melegati (Eds.), Software Business: 14th International Conference, ICSOB 2023, Lahti, Finland, November 27--29, 2023, Proceedings (pp. 173-189). Springer.
  • Schloetzer, J. D., & Yoshinaga, K. (2023). Algorithmic hiring systems: Implications and recommendations for organisations and policymakers. In YSEC Yearbook of Socio-Economic Constitutions 2023: Law and the governance of artificial intelligence (pp. 213-246). Springer.
  • Shaffer Shane, T. (2023). AI incidents and 'networked trouble': The case for a research agenda. Big Data & Society, 10(2).
  • Shoker, S., Reddie, A., Barrington, S., Booth, R., Brundage, M., Chahal, H., Depp, M., Drexel, B., Gupta, R., Favaro, M., Hecla, J., Hickey, A., Konaev, M., Kumar, K., Lambert, N., Lohn, A., O'Keefe, C., Rajani, N., Sellitto, M., Trager, R., Walker, L., Wehsener, A., & Young, J. (2023). Confidence-building measures for artificial intelligence: Workshop proceedings. arXiv.
  • Silicki, K. (2023). Cyberbezpieczeństwo systemów wykorzystujących sztuczną inteligencję w świetle raportów ENISA [Cybersecurity of systems using artificial intelligence in light of ENISA reports]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie [AI cybersecurity. AI in cybersecurity] (pp. 10-21). NASK - Państwowy Instytut Badawczy.
  • Sood, S., & Kim, A. (2023). The golden age of the big data audit: Agile practices and innovations for e-commerce, post-quantum cryptography, psychosocial hazards, artificial intelligence algorithm audits, and deepfakes. International Journal of Innovation and Economic Development, 9(2), 7-23.
  • Stoica, A.-A., & Pica, Ș. (2023). Drones and the ethical politics of public monitoring. Challenges of the Knowledge Society. Public Law, 337-345.
  • Turri, V., & Dzombak, R. (2023). Why we need to know more: Exploring the state of AI incident documentation practices. AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 576-583.
  • Velichkovska, B., Denkovski, D., Gjoreski, H., Kalendar, M., & Osmani, V. (2023). A Survey of Bias in Healthcare: Pitfalls of Using Biased Datasets and Applications. In Artificial Intelligence Application in Networks and Systems (CSOC 2023) (pp. 570-584). Lecture Notes in Networks and Systems, volume 724. Springer.
  • Watson, E., Viana, T., & Zhang, S. (2023). Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI, 4(1), 128-171.
  • Winter, C., Hollman, N., & Manheim, D. (2023). Value Alignment for Advanced Artificial Judicial Intelligence. American Philosophical Quarterly, 60(2), 187-203.
  • Wright, L. S. (2023). Artificial intelligence: Why we need it and why we need to be cautious. In M. Lovell, O. S. Moghraby, & R. Waller (Eds.), Digital Mental Health: From Theory to Practice (pp. 60-71). Cambridge University Press.
  • Wu, W., & Liu, S. (2023). A comprehensive review and systematic analysis of artificial intelligence regulation policies. arXiv.
  • Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. In 2023 IEEE/ACM 2nd International Conference on AI Engineering -- Software Engineering for AI (CAIN) (pp. 27-34). IEEE.
  • Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. arXiv.
  • Xu, W. (2023). User-centered design (IX): A "user experience 3.0" paradigm framework in the intelligence era. arXiv.
  • Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. Interactions, 30(1), 42-47.
  • Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. arXiv.
  • Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human--Computer Interaction, 39(3), 494-518.
  • Zhan, X., Sun, H., & Miranda, S. M. (2023). How does AI fail us? A typological theorization of AI failures. In ICIS 2023 Proceedings: AI in Business and Society.
  • Zhou, L., Moreno-Casares, P. A., Martínez-Plumed, F., Burden, J., Burnell, R., Cheke, L., Ferri, C., Marcoci, A., Mehrbakhsh, B., Moros-Daval, Y., Ó hÉigeartaigh, S., Rutar, D., Schellaert, W., Voudouris, K., & Hernández-Orallo, J. (2023). Predictable artificial intelligence. arXiv.
  • Zhu, Y. (Zhu Yu 朱禹), Chen, G. (Chen Guanze 陈关泽), Lu, Y. (Lu Yongrong 陆泳溶), & Fan, W. (Fan Wei 樊伟). (2023). Generative Artificial Intelligence Governance Action Framework: Content Analysis Based on AIGC Incident Report Texts. 图书情报知识 (Library and Information Knowledge), 40(4), 41-51.
  • Žunić, L., Đukanović, G., & Popović, G. (2023). Rizici vještačke inteligencije: Analiza i implikacije [Risks of artificial intelligence: Analysis and implications]. In 15th International Conference "Information Technology and Application" (ITeO 2023) (Vol. 15, pp. 29-40). Banja Luka, Bosnia and Herzegovina.

Citations in briefs, theses, white papers, and mixed genres

  • Acion, L., Rajngewerc, M., Randall, G., & Etcheverry, L. (2023). Generative AI poses ethical challenges for open science. Nature Human Behaviour, 7, 1800-1801.
  • Antunović, J. (2023). Sigurnost komunikacije u kritičnoj infrastrukturi [Communication security in critical infrastructure] [Undergraduate thesis, Sveučilište u Zagrebu, Fakultet prometnih znanosti]. Repozitorij Fakulteta prometnih znanosti.
  • Agnew, W. (2023). AI ethics and critique for robotics (Publication No. 30636400) [Doctoral dissertation, University of Washington]. ProQuest Dissertations & Theses Global.
  • Attard-Frost, B., & Widder, D. G. (2023). The Ethics of AI Value Chains. arXiv.
  • Bogusz, I. C., & Johnson, D. (2023). AI for the benefit of society: Progress with trust and transparency. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media. Stockholm: Fores.
  • D'Albergo, E., Fasciani, T., & Giovanelli, G. (2023, January 19-21). Social Powers and Governance of Artificial Intelligence in Urban Security Policies: Video Surveillance in Turin. Paper presented at Re-assembling the social. Re(i)stituting the social. 40 years of AIS, Naples, Italy.
  • Desouza, K. C., & Dawson, G. S. (2023). Pathways to trusted progress with artificial intelligence. IBM Center for The Business of Government.
  • Duarte, A. B. F. (2023). Enhancing Portuguese public services: Prototype of a mobile application with a digital assistant [Master's project, Escola Superior de Comunicação Social]. Instituto Politécnico de Lisboa, Escola Superior de Comunicação Social.
  • Giannini, A. (2023). Criminal behavior and accountability of artificial intelligence systems (Doctoral dissertation, University of Florence and Maastricht University).
  • Hoffmann, M., & Frase, H. (2023). Adding structure to AI harm: An introduction to CSET's AI harm framework. Center for Security and Emerging Technology.
  • Isbell, C., Littman, M. L., & Norvig, P. (2023). Viewpoint: Software Engineering of Machine Learning Systems. Communications of the ACM, 66(2), 35-37.
  • Knight, S., Heggart, K., Dickson-Deane, C., Ford, H., Hunter, J., Johns, A., Kitto, K., Cetindamar Kozanoglu, D., Maher, D., & Narayan, B. (2023). Submission in response to the House Standing Committee on Employment, Education and Training's inquiry into the use of generative artificial intelligence in the Australian education system.
  • Kutz, J., Göbels, V. P., Brajovic, D., Fresz, B., Renner, N., Omri, S., Neuhüttler, J., Huber, M., & Bienzeisler, B. (2023). KI-Zertifizierung und Absicherung im Kontext des EU AI Act: Herausforderungen und Bedürfnisse aus Sicht von Unternehmen [AI certification and assurance in the context of the EU AI Act: Challenges and needs from the perspective of companies]. Fraunhofer IAO.
  • Longstaff, T. (2023). SEI Thoughts on AI T and E and Related Topics (Technical report). Carnegie Mellon University, Pittsburgh, PA; Air Force Life Cycle Management Center, Hanscom AFB, MA. Accession Number AD1199686.
  • Massei, G. (2023). Algorithmic Trading: An Overview and Evaluation of Its Impact on Financial Markets [Master's thesis, Università Ca' Foscari Venezia].
  • Musser, M., Lohn, A., Dempsey, J. X., Spring, J., Kumar, R. S. S., Leong, B., Liaghati, C., Martinez, C., Grant, C. D., Rohrer, D., Frase, H., Bansemer, J., Rodriguez, M., Regan, M., Chowdhury, R., & Hermanek, S. (2023). Adversarial machine learning and cybersecurity: Risks, challenges, and legal implications. Center for Security and Emerging Technology.
  • Narayanan, M., Seymour, A., Frase, H., & Elmgren, K. (2023). Repurposing the wheel: Lessons for AI standards (Workshop Report). Center for Security and Emerging Technology.
  • NIST. Risk Management Playbook. 2023
  • Sharma, A. (2023). Testing of machine learning algorithms and models (PhD dissertation). Universität Oldenburg.
  • Shneiderman, B. (2023). ACM TechBrief: Safer algorithmic systems (Issue 6). Association for Computing Machinery.
  • Sivakumaran, A. (2023). Investigating consumer perception and speculative AI labels for creative AI usage in media (Master's thesis, KTH, School of Electrical Engineering and Computer Science).
  • Toner, H., Ji, J., Bansemer, J., Lim, L., Painter, C., Corley, C., Whittlestone, J., Botvinick, M., Rodriguez, M., & Kumar, R. S. S. (2023). Skating to where the puck is going: Anticipating and managing risks from frontier AI systems. Center for Security and Emerging Technology.
  • Wang, L. (2023). An urgency for inclusivity: Redesigning datasets for improved representation of LGBTQ+ identity terms in artificial intelligence (A.I.). HSS4 - The Modern Context: Queer Theory and Politics, Professor Barnick.
  • Zhang, J. (2023). Evaluating Artificial Neural Network Robustness for Safety-Critical Systems [Ph.D. dissertation, Technical University of Denmark]. Kgs. Lyngby: Technical University of Denmark.

2022

  • Macrae, Carl. "Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk." Risk analysis 42.9 (2022): 1999-2025.
  • Felländer, Anna, et al. "Achieving a Data-driven Risk Assessment Methodology for Ethical AI." Digital Society 1.2 (2022): 13.
  • Apruzzese, Giovanni, et al. "'Real Attackers Don't Compute Gradients': Bridging the Gap Between Adversarial ML Research and Practice." arXiv preprint arXiv:2212.14315 (2022).
  • Petersen, Eike, et al. "Responsible and regulatory conform machine learning for medicine: A survey of challenges and solutions." IEEE Access 10 (2022): 58375-58418.
  • Schuett, Jonas. "Three lines of defense against risks from AI." arXiv preprint arXiv:2212.08364 (2022).
  • Schiff, Daniel S. "Looking through a policy window with tinted glasses: Setting the agenda for US AI policy." Review of Policy Research.
  • Neretin, Oleksii, and Vyacheslav Kharchenko. "Model for Describing Processes of AI Systems Vulnerabilities Collection and Analysis using Big Data Tools." 2022 12th International Conference on Dependable Systems, Services and Technologies (DESSERT). IEEE, 2022.
  • Durso, Francis, et al. "Analyzing Failures in Artificial Intelligent Learning Systems (FAILS)." 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022.
  • Kassab, Mohamad, Joanna DeFranco, and Phillip Laplante. "Investigating Bugs in AI-Infused Systems: Analysis and Proposed Taxonomy." 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2022.
  • Braga, Juliao, et al. "Projeto para o desenvolvimento de um artigo sobre governança de algoritmos e dados [Project for the Development of a Paper on Algorithm and Data Governance]." (2022).
  • Secchi, Carlo, and Alessandro Gili. "Digitalisation for sustainable infrastructure: the road ahead." Digitalisation for sustainable infrastructure (2022): 1-326.
  • Groza, Adrian, et al. "Elaborarea cadrului strategic național în domeniul inteligenței artificiale [Developing the National Strategic Framework in the Field of Artificial Intelligence]."
  • NIST. Risk Management Playbook. 2022
  • Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
  • Schwartz, Reva, et al. "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." (2022).
  • McGrath, Quintin, et al. "An Enterprise Risk Management Framework to Design Pro-Ethical AI Solutions." University of South Florida. (2022).
  • Nor, Ahmad Kamal Mohd, et al. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study of Real-World Gas Turbine Anomalies." (2022).
  • Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." arXiv preprint arXiv:2203.00938 (2022).
  • Hundt, Andrew, et al. "Robots Enact Malignant Stereotypes." 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
  • Tidjon, Lionel Nganyewou, and Foutse Khomh. "Threat Assessment in Machine Learning based Systems." arXiv preprint arXiv:2207.00091 (2022).
  • Naja, Iman, et al. "Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information." IEEE Access 10 (2022): 74383-74411.
  • Cinà, Antonio Emanuele, et al. "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning." arXiv preprint arXiv:2205.01992 (2022).
  • Schröder, Tim, and Michael Schulz. "Monitoring machine learning models: A categorization of challenges and methods." Data Science and Management (2022).
  • Corea, Francesco, et al. "A principle-based approach to AI: the case for European Union and Italy." AI & SOCIETY (2022): 1-15.
  • Carmichael, Zachariah, and Walter J. Scheirer. "Unfooling Perturbation-Based Post Hoc Explainers." arXiv preprint arXiv:2205.14772 (2022).
  • Wei, Mengyi, and Zhixuan Zhou. "AI Ethics Issues in Real World: Evidence from AI Incident Database." arXiv preprint arXiv:2206.07635 (2022).
  • Karunagaran, Surya, Ana Lucic, and Christine Custis. "XAI Toolsheet: Towards A Documentation Framework for XAI Tools."
  • Paudel, Shreyasha, and Aatiz Ghimire. "AI Ethics Survey in Nepal."
  • Ferguson, Ryan. "Transform Your Risk Processes Using Neural Networks."
  • Fujitsu Corporation. "AI Ethics Impact Assessment Casebook," 2022
  • Shneiderman, Ben and Du, Mengnan. "Human-Centered AI: Tools" 2022
  • Salih, Salih. "Understanding Machine Learning Interpretability." Medium. 2022
  • Garner, Carrie. "Creating Transformative and Trustworthy AI Systems Requires a Community Effort." Software Engineering Institute. 2022
  • Weissinger, Laurin. "AI, Complexity, and Regulation." The Oxford Handbook of AI Governance (February 14, 2022).

2021

  • Arnold, Z., and H. Toner. "AI Accidents: An Emerging Threat." CSET Policy Brief (2021).
  • Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions." Philosophies 6.1 (2021): 6.
  • Falco, Gregory, and Leilani H. Gilpin. "A stress testing framework for autonomous system verification and validation (v&v)." 2021 IEEE International Conference on Autonomous Systems (ICAS). IEEE, 2021.
  • Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions." arXiv preprint arXiv:2107.09546 (2021).
  • John-Mathews, Jean-Marie. L'Éthique de l'Intelligence Artificielle en Pratique: Enjeux et Limites [AI Ethics in Practice: Challenges and Limitations]. Diss. Université Paris-Saclay, 2021.
  • Macrae, Carl. "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk." (June 4, 2021).
  • Hong, Matthew K., et al. "Planning for Natural Language Failures with the AI Playbook." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
  • Ruohonen, Jukka. "A Review of Product Safety Regulations in the European Union." arXiv preprint arXiv:2102.03679 (2021).
  • Kalin, Josh, David Noever, and Matthew Ciolino. "A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models." arXiv preprint arXiv:2103.02718 (2021).
  • Aliman, Nadisha Marie, and Leon Kester. "Epistemic defenses against scientific and empirical adversarial AI attacks." CEUR Workshop Proceedings. Vol. 2916. CEUR WS, 2021.
  • Smith, Catherine. "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape." IFLA Journal (2021): 03400352211057145

If you have a scholarly work that should be added here, please contact us.
