AI Incident Database

Related Work

While formal AI incident research is relatively new, a number of people have been collecting what could be considered incidents. These include:

  • Awesome Machine Learning Interpretability: AI Incident Tracker
  • AI and Algorithmic Incidents and Controversies, by Charlie Pownall
  • Map of Helpful and Harmful AI

If you have an incident resource that could be added here, please contact us.

The following publications have been indexed by Google Scholar as referencing the database itself, rather than solely individual incidents. Please contact us if your reference is missing.

Responsible AI Collaborative Research

Where needed to serve the broader safety and fairness communities, the Collab produces and sponsors research. Works to date include the following:

  • The original research publication released at the public announcement of the AI Incident Database. All citations of this work will be added to this page.
    McGregor, Sean. "Preventing repeated real world AI failures by cataloging incidents: The AI incident database." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 17. 2021.
  • A major update to the incident definitions and criteria as presented at the 2022 NeurIPS Workshop on Human-Centered AI.
    McGregor, Sean, Kevin Paeth, and Khoa Lam. "Indexing AI Risks with Incidents, Issues, and Variants." arXiv preprint arXiv:2211.10384 (2022).
  • Our approach to reducing the uncertainty of incident causes when analyzing open source incident reports. Presented at SafeAI.
    Pittaras, Nikiforos, and Sean McGregor. "A taxonomic system for failure cause analysis of open source AI incidents." arXiv preprint arXiv:2211.07280 (2022).
  • Important lessons learned from editing AI incidents, focusing on issues related to their temporal ambiguity, multiplicity, large-scale exposure harms, and inherent uncertainty in reporting. Submitted to the 2025 Conference on Innovative Applications of Artificial Intelligence (IAAI-25).
    Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. "Lessons for Editors of AI Incidents from the AI Incident Database." arXiv preprint arXiv:2409.16425 (2024).

2026 (Through February 15th)

  • Abuadbba, Alsharif, Nazatul Sultan, Surya Nepal, and Sanjay Jha. "Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework." arXiv preprint arXiv:2602.01942 (2026). https://doi.org/10.48550/arXiv.2602.01942.
  • Anderson, John. "Data-Centric Governance and Trustworthy Artificial Intelligence for Ethical Welfare Management Systems." EuroLexis Research Index of International Multidisciplinary Journal for Research & Development 13, no. 01 (2026): 1075-1081. https://researchcitations.org/index.php/elriijmrd/article/view/94.
  • Assalaarachchi, Lakshana Iruni, Zainab Masood, Rashina Hoda, and John Grundy. “Toward Agentic Software Project Management: A Vision and Roadmap.” arXiv (cs.SE), January 23, 2026. (Author’s preprint accepted for AGENT workshop at ICSE 2026.) https://doi.org/10.48550/arXiv.2601.16392.
  • Bentley, Kate H., Luca Belli, Adam M. Chekroud, Emily J. Ward, Emily R. Dworkin, Emily Van Ark, Kelly M. Johnston, Will Alexander, Millard Brown, and Matt Hawrilenko. “VERA-MH: Reliability and Validity of an Open-Source AI Safety Evaluation in Mental Health.” arXiv (cs.AI), revised February 17, 2026. https://doi.org/10.48550/arXiv.2602.05088.
  • Chaudhry, Mohit. “The Licensing Landscape for Responsible, Open-Source AI.” In Democratising AI: Towards Open, Decentralised AI Ecosystems, edited by Basu Chandola and Anirban Sarma, 98–111. New Delhi: Observer Research Foundation, 2026. https://www.orfonline.org/public/uploads/upload/20260211094122.pdf.
  • Çılgın, Turgut. “Dialectical Analysis of Problems Created by Technological Developments in the Labour Market.” Sosyal Siyaset Konferansları Dergisi / Journal of Social Policy Conferences, no. 89 (January 2026). https://doi.org/10.26650/jspc.2025.89.1773834.
  • Cuesta, Albert. “La IA en el futur de les llengües europees no hegemòniques: oportunitats, desafiaments i estratègies de preservació.” Barcelona: Fundació Irla; Coppieters Foundation; Accent Obert, February 2026. PDF. ISBN 978-84-09-81831-0. https://irla.cat/wp-content/uploads/2026/01/estudi-ia-llengua-fundacioirla-coppietersfoundation-accentobert.pdf.
  • DeLaney, JR. “Deep Dive: The 1973 Lighthill Report: How One Mathematician Accidentally Triggered AI’s Dark Age.” AI Innovations Unleashed, February 4, 2026. https://www.aiinnovationsunleashed.com/deep-dive-the-1973-lighthill-report-how-one-mathematician-accidentally-triggered-ais-dark-age/.
  • Dockara, Tirupathi Rao. “Data Governance for Sustainable AI in Organizations: A Benchmarkability-First Capability Model, Evidence Map, and Marketplace Microdata Demonstration.” Research Square (preprint), January 30, 2026. https://doi.org/10.21203/rs.3.rs-8734900/v1.
  • Domin, Heather, Pradyumna Chari, Ramesh Raskar, and Grace Davin. “Economic and Systemic Considerations in Agentic Web Systems.” SSRN, January 15, 2026. https://doi.org/10.2139/ssrn.6078327.
  • Ford, Heather, Andrew Burrell, Monica Monin, Bhuva Narayan, and Suneel Jethani. “Hacking AI Chatbots for Critical AI Literacy in the Library.” Journal of the Australian Library and Information Association, published online February 4, 2026. https://doi.org/10.1080/24750158.2026.2614000.
  • Gordieiev, Oleksandr, Daria Gordieieva, Rainer Austen, Anatoliy Gorbenko, and Olga Tarasyuk. “Quality Assessment of Artificial Intelligence Systems: A Metric-Based Approach.” Electronics 15, no. 3 (2026): 691. https://doi.org/10.3390/electronics15030691.
  • Grimm, Robert. “Mapping the Stochastic Penal Colony.” arXiv preprint arXiv:2602.00033 [cs.CY] (January 18, 2026). https://doi.org/10.48550/arXiv.2602.00033.
  • Jones, Groucho. “Convergent Personality Representations in Large Language Models Evidence for a Theory of Latent Attractors.” SSRN Scholarly Paper, December 31, 2025. https://doi.org/10.2139/ssrn.5993074.
  • Lee, Sung Une, Harsha Perera, Yue Liu, Boming Xia, Qinghua Lu, Liming Zhu, Olivier Salvado, and Jon Whittle. “Responsible AI Question Bank for Risk Assessment.” ACM Computing Surveys (Just Accepted), published online January 29, 2026. https://doi.org/10.1145/3790096.
  • Saburov, Sergey. “Security and Risk Implications of Transformer-Based Large Language Models.” Preprint, Preprints.org, February 9, 2026. https://doi.org/10.20944/preprints202602.0680.v1.
  • Wall, Emily. “Next Steps in Research on Human Bias in Visual Data Analysis.” In Human Bias in Visual Data Analysis, 221–229. Cham: Springer, 2026. https://doi.org/10.1007/978-3-032-09307-3_8.
  • Xu, Wei, Zaifeng Gao, and Marvin J. Dainoff. “An HCAI Methodological Framework: Putting It into Action to Enable Human-Centered AI.” IEEE Transactions on Human-Machine Systems 56, no. 1 (February 2026): 78–94. https://doi.org/10.1109/THMS.2025.3631590.
  • Xu, Wei. “Human-Centered Artificial Intelligence (HCAI): Foundations and Approaches.” arXiv preprint arXiv:2601.01247 [cs.HC] (February 18, 2026; rev. from January 3, 2026). https://doi.org/10.48550/arXiv.2601.01247.
  • Zhou, Jianlong, and Fang Chen. “AI Ethics Operationalisation: Progress, Tools, and Opportunities.” SSRN (January 12, 2026). https://doi.org/10.2139/ssrn.6067748.

2025

  • Adepu, Pavan Kumar, and Sai Kumar Kalya. "Red Teaming as a Service (RTaaS) for Cloud-Hosted GenAI: A Responsible AI Perspective." (2025). https://www.researchgate.net/publication/399104917_Red_Teaming_as_a_Service_RTaaS_for_Cloud-Hosted_GenAI_A_Responsible_AI_Perspective.
  • Agarwal, Avinash, and Manisha J. Nene. “A Five-Layer Framework for AI Governance: Integrating Regulation, Standards, and Certification.” Transforming Government: People, Process and Policy 19, no. 3 (2025): 535–555. https://doi.org/10.1108/TG-03-2025-0065.
  • Agarwal, Avinash, and Manisha J. Nene. “Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India.” arXiv (2025). https://doi.org/10.48550/arXiv.2509.09508.
  • Aguilar Antonio, Juan Manuel. Uso de la inteligencia artificial por redes criminales de alto riesgo. París: Programa EL PACCTO 2.0 (Expertise France), September 2025. https://doi.org/10.5281/zenodo.16750778.
  • Amador-Lankster, Velmar. “Facial Recognition in Policing: How Algorithmic Bias Targets People of Color.” The Undergraduate Law Review at UC San Diego 3, no. 1 (2025). https://doi.org/10.5070/L3.47403.
  • Apeiron, Anastasia S., Davide Dell'Anna, Pradeep K. Murukannaiah, and Pınar Yolum. "Model and Mechanisms of Consent for Responsible Autonomy." In 24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025, pp. 133-141. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2025. https://doi.org/10.5555/3709347.3743525.
  • Assuncao, Isadora. “The Collaborative Edge: Context-Specific Pathways to Responsible AI Development Through University-Industry Partnerships.” SSRN, January 22, 2025. https://doi.org/10.2139/ssrn.5297671.
  • Attard-Frost, Blair, and David Gray Widder. "The ethics of AI value chains." Big Data & Society 12, no. 2 (2025): 20539517251340603. https://doi.org/10.1177/20539517251340603.
  • Attard-Frost, Blair. "Transfeminist AI governance." arXiv preprint arXiv:2503.15682 (2025). https://doi.org/10.48550/arXiv.2503.15682.
  • Bahiru, Tadesse K., and Ioannis A. Kakadiaris. "Codecard: Leveraging LLMs to Evaluate AI Model Code Development with the System Cards Framework." (2025). https://par.nsf.gov/servlets/purl/10662105.
  • Bai, Bing. "Research on Risks and Governance Pathways of False and Harmful Information in the Application of Generative Artificial Intelligence." In 2025 5th International Conference on Artificial Intelligence, Big Data and Algorithms (CAIBDA), pp. 626-629. IEEE, 2025. https://doi.org/10.1109/CAIBDA65784.2025.11183077.
  • Ballantyne, Emily, Michael Pin-Chuan Lin, Daniel H Chang, and Eric Poitras. "Bridging educational equity gaps: expanding the CHAT-ACTS framework for personalized GenAI chatbots in higher education." Journal of Computing in Higher Education 37, no. 4 (2025): 1564-1589. https://doi.org/10.1007/s12528-025-09475-z.
  • Batool, Amna, Didar Zowghi, and Muneera Bano. "AI governance: a systematic literature review." AI and Ethics 5, no. 3 (2025): 3265-3279. https://doi.org/10.1007/s43681-024-00653-w.
  • Batool, Amna, Sunny Lee, Yue Liu, and Liming Dong. “The Anatomy of AI Policies: A Systematic Comparative Analysis of AI Policies across the Globe.” AI and Ethics 6, art. 55 (2026). Published December 10, 2025. https://doi.org/10.1007/s43681-025-00886-3.
  • Battineni, Gopi, Sharmin Nisar Chougule, Aman Kataria, and Lalit Mohan Goyal. "Navigating Ethics and Legalities in Artificial Intelligence: Challenges, Frameworks, and Future Directions." In Generative AI in Healthcare: Concepts, Methodologies, Tools, and Applications, pp. 293-315. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-95-2129-6_12.
  • Becerra, Sofia, Jiayan Xie, Clare Boulding, and Clara van Muiswinkel. "QueenMUN 2025 Harmful Content on Social Media Background Guide." https://queenmun.qmslife.com/wp-content/uploads/2025/04/QueenMUN-2025-Background-Guide-UNICEF.pdf.
  • Beiker, Sven A., Jonas Waidringer, and Chandadevi Giri. "Artificial Intelligence in Product Development and Innovation." In 2025 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA), pp. 1-9. IEEE, 2025. https://doi.org/10.1109/ACDSA65407.2025.11165896.
  • Bhardwaj, Akhil, Jackson Nickerson, and Joseph T. Mahoney. “Organizations Beyond the Limit: The Role of Strategic Decisions in Industrial Disasters.” SSRN, October 22, 2025. https://doi.org/10.2139/ssrn.5644048.
  • Bhardwaj, Akhil. “Avoiding the Iron Cage of AI Technocracy Based on the Principle of Reversibility of Harm.” Conference paper, accepted September 9, 2025. University of Bath Research Portal. https://researchportal.bath.ac.uk/en/publications/avoiding-the-iron-cage-of-ai-technocracy-based-on-the-principle-o/.
  • Bikkasani, Dileesh Chandra. "Navigating artificial general intelligence (AGI): Societal implications, ethical considerations, and governance strategies." AI and Ethics 5, no. 3 (2025): 2021-2036. https://doi.org/10.1007/s43681-024-00642-z.
  • Bonnet, Severin, and Frank Teuteberg. "Unfolding the potential of generative artificial intelligence: Design principles for chatbots in academic teaching and research." International Journal of Knowledge Management (IJKM) 21, no. 1 (2025): 1-25. https://doi.org/10.4018/IJKM.368223.
  • Brennan, Andrea, Gwyneth Sutherlin, Lisa Pagano-Wallace, and Hermie Mendoza. "Finding Deepfakes: A Tabletop Exercise About AI, Decisionmaking, and Algorithmic Performance." Joint Force Quarterly 118, no. 3 (2025): 49-55. https://digitalcommons.ndu.edu/joint-force-quarterly/vol118/iss3/8/.
  • Bucknall, Ben, Saad Siddiqui, Lara Thurnherr, Conor McGurk, Ben Harack, Anka Reuel, Patricia Paskov et al. "In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate?" In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, pp. 3148-3161. 2025. https://doi.org/10.1145/3715275.3732201.
  • Burton, Sharon L., and David P. Harvie. "Deepfakes: Unmasking the Technological, Societal, and Ethical Dimensions." RAIS Journal for Social Sciences 9, no. 2 (2025): 1-14. https://journal.rais.education/index.php/raiss/article/view/277.
  • Buselli, Irene. No Metric Is an Island: How Algorithmic Fairness Interacts with Other AI Properties. PhD diss., Università degli Studi di Genova, 2025. https://hdl.handle.net/20.500.14242/352707.
  • Campos-Castillo, Celeste, Xuan Kang, and Linnea I. Laestadius. "Perspectives on How Sociology Can Advance Theorizing about Human-Chatbot Interaction and Developing Chatbots for Social Good." arXiv preprint arXiv:2507.05030 (2025). https://doi.org/10.48550/arXiv.2507.05030.
  • Cao, Hongpeng, Yanbing Mao, Yihao Cai, Lui Sha, and Marco Caccamo. “Runtime Learning Machine.” OpenReview (submitted to ICLR 2025; first posted September 17, 2024; last modified February 5, 2025). https://openreview.net/forum?id=KCTHM2Ffh3.
  • Carvalho, Waydell. "Governing Self-Modifying AI: A Federal Framework for Runtime Safety." Available at SSRN 5392553 (2025). https://dx.doi.org/10.2139/ssrn.5392553.
  • Cascella, Marco, Mohammed Naveed Shariff, Omar Viswanath, Matteo Luigi Giuseppe Leoni, and Giustino Varrassi. "Ethical Considerations in the Use of Artificial Intelligence in Pain Medicine." Current Pain and Headache Reports 29, no. 1 (2025): 10. https://doi.org/10.1007/s11916-024-01330-7.
  • Cascella, Marco. "Basic Knowledge of AI for Clinicians." In Exploring AI in Pain Research and Management, pp. 5-24. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-78833-8_2.
  • Castañeira, Josu Eguiluz, Axel Brando, Migle Laukyte, and Marc Serra-Vidal. "Position Paper: If Innovation in AI Systematically Violates Fundamental Rights, Is It Innovation at All?" arXiv preprint arXiv:2511.00027 (2025). https://doi.org/10.48550/arXiv.2511.00027.
  • Çetin, Orçun, Baturay Birinci, Çağlar Uysal, and Budi Arief. "Exploring the Cybercrime Potential of LLMs: A Focus on Phishing and Malware Generation." In European Interdisciplinary Cybersecurity Conference, pp. 98-115. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-94855-8_7.
  • Chakraborti, Mahasweta, Bert Joseph Prestoza, Nicholas Vincent, Vladimir Filkov, and Seth Frey. "Responsible AI in the OSS: Reconciling Innovation with Risk Assessment and Disclosure." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 1, pp. 513-527. 2025. https://doi.org/10.1609/aies.v8i1.36567.
  • Chatzipanagiotis, Michael. "Incident Reporting and Investigation under the AI Act: Some Insights from Aviation." International Journal of Law and Information Technology, forthcoming (2025). https://dx.doi.org/10.2139/ssrn.5811603.
  • Chau, Bao Kham, and George He. "Audio deepfakes and the regulation of the landlords of creativity." In Cambridge Forum on AI: Law and Governance, vol. 1, p. e30. Cambridge University Press, 2025. https://doi.org/10.1017/cfl.2025.10011.
  • Chedalla, Anish Sai, Samina Ali, Jiuming Chen, and Eric Xia. "Turn-by-Turn Behavior Monitoring in LM-Guided Psychotherapy." In The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pp. 105-122. 2025. https://aclanthology.org/2025.ijcnlp-srw.10/.
  • Chen, Kevin, Saleh Afroogh, Abhejay Murali, David Atkinson, Amit Dhurandhar, and Junfeng Jiao. "LLM Harms: A Taxonomy and Discussion." arXiv preprint arXiv:2512.05929 (2025). https://doi.org/10.48550/arXiv.2512.05929.
  • Chen, Yian, and Lana Do. "Leveraging LLMs with Strategic Prompting." In Human-Computer Interaction: 10th Iberoamerican Conference, HCI-COLLAB 2024, Pereira, Colombia, June 4–7, 2024, Revised Selected Papers, p. 39. Springer Nature, 2025. https://doi.org/10.1007/978-3-031-91328-0_4.
  • Ciriello, Raffaele Fabio, Angelina Ying Chen, and Zara Annette Rubinsztein. "Compassionate AI Design, Governance, and Use." IEEE Transactions on Technology and Society (2025). https://doi.org/10.1109/TTS.2025.3538125.
  • Coester, Ursula, Dominik Adler, Christian Böttger, and Norbert Pohlmann. "Unintended Consequences of Large Language Models and Their Impact on Society." Electronic Communications of the EASST 84 (2025). https://doi.org/10.14279/eceasst.v84.2676.
  • Conklin, Sherri. "A Model for Using Ethical Theory to Specify Epistemic Goals for Explainable AI." In 2025 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), pp. 1-9. IEEE, 2025. https://doi.org/10.1109/ETHICS65148.2025.11098188.
  • Corbucci, Luca. Beyond Model Accuracy: Building Trustworthy Federated Learning Systems. PhD diss., Università degli Studi di Pisa, 2025. https://hdl.handle.net/20.500.14242/307962.
  • Dai, Jessica, Inioluwa Deborah Raji, Benjamin Recht, and Irene Y. Chen. "Aggregated Individual Reporting for Post-Deployment Evaluation." arXiv preprint arXiv:2506.18133 (2025). https://doi.org/10.48550/arXiv.2506.18133.
  • Del Castillo, Aída Ponce. "Governing AI at Work: Anticipating Technological Trends and the Role of Social Dialogue." Rethink, Regulate, Reimagine: 55-61. https://accoshop-assets-prod.s3.eu-west-1.amazonaws.com/content_download/9789464679694/9789464679694_AI%20Rethink%2C%20Regulate%2C%20Reimagine_downloadable.pdf#page=56.
  • Denecke, Kerstin, Guillermo Lopez-Campos, and Richard May. "The unintended harm of Artificial Intelligence (AI): exploring critical incidents of AI in healthcare." In MEDINFO 2025—Healthcare Smart× Medicine Deep, pp. 1013-1018. IOS Press, 2025. https://doi.org/10.3233/shti250992.
  • Doherty, Camille. “The Danger of Deepfakes: Information Pollution and Epistemic Insecurity.” Senior thesis (B.A.), Claremont McKenna College, December 2024. https://scholarship.claremont.edu/cmc_theses/3750.
  • Durand, Serge. "Over-Approximating Neural Networks for Verification, Robustness, and Explainability." PhD diss., Université Paris-Saclay, 2025. https://theses.hal.science/tel-05468098/.
  • Elamrani, Aïda. "Introduction to Artificial Consciousness: History, Current Trends and Ethical Challenges." arXiv preprint arXiv:2503.05823 (2025). https://doi.org/10.48550/arXiv.2503.05823.
  • Emory, Tara, and Maura R. Grossman. “A Primer on the Different Meanings of ‘Bias’ for Legal Practice.” SSRN, last revised July 17, 2025. https://doi.org/10.2139/ssrn.5356035.
  • Erukude, Sai Teja, Viswa Chaitanya Marella, and Suhasnadh Reddy Veluru. "AI-Driven Cybersecurity Threats: A Survey of Emerging Risks and Defensive Strategies." In International Conference on Data Science and Applications, pp. 185-197. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-032-10783-1_14.
  • Ezell, Carson, Xavier Roberts-Gaal, and Alan Chan. "Incident Analysis for AI Agents." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 1, pp. 865-878. 2025. https://doi.org/10.1609/aies.v8i1.36596.
  • Faba, José María Martín. "La inteligencia artificial en la nueva directiva de responsabilidad por los daños causados por productos defectuosos: ¿realidad o expectativa?" Revista CESCO de Derecho de Consumo 53 (2025): 14-19. https://doi.org/10.18239/RCDC_2025.53.3648.
  • Faleiro, Felipe Reis. Jornalismo na era da IA: limites e possibilidades da inteligência artificial generativa em veículos jornalísticos. Master’s thesis, Programa de Pós-Graduação em Comunicação Social, Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Porto Alegre, 2025. https://repositorio.pucrs.br/dspace/bitstream/10923/27165/1/000510063-Texto%2Bcompleto-0.pdf.
  • Fan, Liang, and Menglei Wu. "Social Impacts, Risks, and Governance Framework of Generative Artificial Intelligence Applications." In 2025 International Conference on Algorithms, Software and Network Security (ASNS), pp. 41-48. IEEE, 2025. https://doi.org/10.1109/ASNS67347.2025.11346050.
  • Faustorilla Jr, John Francis, and Joseph Thomas Capistrano. "Establishing Responsible Artificial Intelligence-Use Among Healthcare Institutions Through ISO IEC 42001 Artificial Intelligence Management System." In Proceedings of Eighth International Conference on Information System Design and Intelligent Applications, pp. 355-366. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-9248-4_27.
  • Feng, Jing, Xiaolu Bai, Yunan Liu, and Christopher Michael Cunningham. "Proactive Remote Operation of Automated Vehicles: Supporting human controllability." In Handbook of Human-Centered Artificial Intelligence, pp. 1-47. Singapore: Springer Nature Singapore, 2025. https://link.springer.com/content/pdf/10.1007/978-981-97-8440-0_107-1.pdf.
  • Gao, Haoyu, Mansooreh Zahedi, Wenxin Jiang, Hong Yi Lin, James Davis, and Christoph Treude. "AI Safety in the Eyes of the Downstream Developer: A First Look at Concerns, Practices, and Challenges." arXiv preprint arXiv:2503.19444 (2025). https://doi.org/10.48550/arXiv.2503.19444.
  • Gengler, Eva, and Marco Wedel. “Ethical AI Through a Feminist Lens: Challenging Power and Redefining Technology.” In Handbook of Global Philosophies on AI Ethics: Toward Sustainable Futures, edited by Naresh Singh and Ram B. Ramachandran, 189–200. CRC Press, 2025. https://www.routledge.com/Handbook-of-Global-Philosophies-on-AI-Ethics-Toward-Sustainable-Futures/Singh-Ramachandran/p/book/9781032955643.
  • Ghaffar Nia, Nafiseh, Amin Amiri, and Adrienne Kline. "Bioethical Perspectives on Deployment of Large Language Model Agents: A Scoping Review." Authorea Preprints (2025). https://doi.org/10.36227/techrxiv.175979365.58528281/v1.
  • Ghaffar Nia, Nafiseh, Amin Amiri, Yuan Luo, and Adrienne Kline. “Ethical Perspectives on Deployment of Large Language Model Agents in Biomedicine: A Survey.” AI and Ethics 6, art. 32 (2026). Published December 4, 2025. https://doi.org/10.1007/s43681-025-00847-w.
  • Giannopoulos, Giorgos, and Dimitris Sacharidis. "Responsible AI." In Human-Centered AI: An Illustrated Scientific Quest, pp. 619-644. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-61375-3_12.
  • Gibson, Stephen, and Winston Tang. “Challenges in Assessing the Impacts of Regulation of Artificial Intelligence.” M-RCBG Associate Working Paper Series 2025.262. Cambridge, MA: Harvard Kennedy School, Mossavar-Rahmani Center for Business & Government, July 2025. https://dash.harvard.edu/handle/1/42718677.
  • Gibson, Stephen, and Winston Tang. Challenges in Assessing the Impacts of Regulation of Artificial Intelligence. London: Social Market Foundation, October 2025. https://www.smf.co.uk/wp-content/uploads/2025/10/Challenges-in-assessing-the-impacts-of-regulation-of-Artificial-Intelligence-Oct-25.pdf.
  • Gigante, Domenico. "A framework for mitigating Trustworthy AI issues in practice." (2025). https://ricerca.uniba.it/handle/11586/539808.
  • Girard-Chanudet, Camille. "Ground-truth is law: The invisible conceptual work behind AI." Big Data & Society 12, no. 2 (2025): 20539517251352823. https://doi.org/10.1177/20539517251352823.
  • Goldkind, Lauri, Joy Ming, and Alex Fink. "AI in the nonprofit human services: Distinguishing between hype, harm, and hope." Human Service Organizations: Management, Leadership & Governance 49, no. 3 (2025): 225-236. https://doi.org/10.1080/23303131.2024.2427459.
  • Golpayegani, Seyedeh Delaram S. Semantic Frameworks to Support the EU AI Act’s Risk Management and Documentation. PhD diss., Trinity College Dublin, the University of Dublin, 2025. https://www.tara.tcd.ie/bitstreams/6056f177-b210-4aa3-9abd-1dd67f669588/download.
  • Gölz, Paul, Nika Haghtalab, and Kunhe Yang. "Distortion of AI Alignment: Does Preference Optimization Optimize for Preferences?" arXiv preprint arXiv:2505.23749 (2025). https://doi.org/10.48550/arXiv.2505.23749.
  • Goutier, Marc, Christopher Diebel, Martin Adam, and Alexander Benlian. "Humans Over-rely On Help From Artificial Intelligence In Problem-Solving." (2025). https://aisel.aisnet.org/ecis2025/hci/hci/5/.
  • Gross, Nicole. "A powerful potion for a potent problem: transformative justice for generative AI in healthcare." AI and Ethics 5, no. 3 (2025): 2089-2101. https://doi.org/10.1007/s43681-024-00519-1.
  • Gunasekara, Lakshitha, Nicole El-Haber, Swati Nagpal, Harsha Moraliyage, Zafar Issadeen, Milos Manic, and Daswin De Silva. "A Systematic Review of Responsible Artificial Intelligence Principles and Practice." Applied System Innovation 8, no. 4 (2025): 97. https://doi.org/10.3390/asi8040097.
  • Gupta, Anchal, Gleb Papyshev, and James T. Kwok. "Bubble, Bubble, AI’s Rumble: Why Global Financial Regulatory Incident Reporting Is Our Shield Against Systemic Stumbles." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 2, pp. 1181-1193. 2025. https://doi.org/10.1609/aies.v8i2.36621.
  • Hadan, Hilda, Reza Hadi Mogavi, Leah Zhang-Kennedy, and Lennart E. Nacke. “Who Is Responsible When AI Fails? Mapping Causes, Entities, and Consequences of AI Privacy and Ethical Incidents.” Preprint submitted to International Journal of Human-Computer Interaction, August 21, 2025. https://doi.org/10.1080/10447318.2025.2549073.
  • Han, Dong Y. "Artificial Intelligence in and Beyond Healthcare Psychology." Journal of Clinical Psychology in Medical Settings 32, no. 4 (2025): 600-607. https://doi.org/10.1007/s10880-025-10101-4.
  • Hanschke, Vanessa Aisyahsari. This Is a Drill: Developing Contextual Methods for Responsible AI with Industry Practitioners. PhD diss., University of Bristol, 2025. https://research-information.bris.ac.uk/ws/portalfiles/portal/454534429/Thesis-Pure-Hanschke.pdf.
  • Harmaala, Rasmus Valtteri. Designing Trustworthy Algorithms: Operational Environment as a Cornerstone for Governing Risk and Enhancing Trust. Master’s thesis, Vaasan yliopisto (University of Vaasa), March 29, 2025. https://urn.fi/URN:NBN:fi-fe2025032922232.
  • He, Linxuan, Qing-Shan Jia, Ang Li, Hongyan Sang, Ling Wang, Jiwen Lu, Tao Zhang et al. "Towards provable probabilistic safety for scalable embodied AI systems." arXiv preprint arXiv:2506.05171 (2025). https://doi.org/10.48550/arXiv.2506.05171.
  • Hilbig, Tim, and Michael Schulz. “Applicability of Project Management Methods and Key Performance Indicators for Success-Oriented Control of AI Projects.” In ECIS 2025 Proceedings, General Track, Paper 10. Amman, Jordan, 2025. https://aisel.aisnet.org/ecis2025/general_track/general_track/10.
  • Hodel, Damian, and Lindah Kotut. "AI Has No Rights: from System-Based to Stakeholder-Based AI Governance." (2025). https://chi-staig.github.io/papers/submission6.pdf.
  • Hoffmann, Mia. The Mechanisms of AI Harm: Lessons Learned from AI Incidents. Center for Security and Emerging Technology (CSET), Georgetown University, October 2025. https://cset.georgetown.edu/wp-content/uploads/CSET-The-Mechanisms-of-AI-Harm.pdf.
  • Holzinger, Andreas, Luca Longo, Angelo Cangelosi, and Javier Del Ser. "Research Frontiers in Machine Learning & Knowledge Extraction." Machine Learning and Knowledge Extraction 8, no. 1 (2025): 6. https://doi.org/10.3390/make8010006.
  • Howell, Bronwyn. "WEIRD? Institutions and consumers’ perceptions of artificial intelligence in 31 countries." AI & Society 40, no. 6 (2025): 4409-4431. https://doi.org/10.1007/s00146-025-02217-w.
  • Ibrahim, Lujain, Katherine M. Collins, Sunnie SY Kim, Anka Reuel, Max Lamparth, Kevin Feng, Lama Ahmad et al. "Measuring and mitigating overreliance is necessary for building human-compatible AI." arXiv preprint arXiv:2509.08010 (2025). https://doi.org/10.48550/arXiv.2509.08010.
  • Jackson, Brian A., and David R. Frelinger. Valuing and Assessing Prevention and Preparedness for Potential Artificial Intelligence Disasters: Thinking Rationally About Artificial Intelligence–Caused Industrial Accidents, “9/11s,” Extinction Events, and Other Incidents. RAND Corporation, October 6, 2025. https://doi.org/10.7249/RRA4219-1.
  • Jelodar, Hamed, Samita Bai, Parisa Hamedi, Hesamodin Mohammadian, Roozbeh Razavi-Far, and Ali Ghorbani. "Large language model (llm) for software security: Code analysis, malware analysis, reverse engineering." arXiv preprint arXiv:2504.07137 (2025). https://doi.org/10.48550/arXiv.2504.07137.
  • Judd, Nick, Alexandre Vaz, Kevin Paeth, Layla Inés Davis, Milena Esherick, Jason Brand, Inês Amaro, and Tony Rousmaniere. "Independent Clinical Evaluation of General-Purpose LLM Responses to Signals of Suicide Risk." arXiv preprint arXiv:2510.27521 (2025). https://doi.org/10.48550/arXiv.2510.27521.
  • Kalaichandran, Amitha, Judy Perez, Shweta Chaukekar, Stephen Klasko, David Katz, and Daniel Lakoff. “Artificial Intelligence for Chronic Disease Prevention and Management: A Scoping Review of the Current Evidence, Challenges, and Future Directions.” SSRN, November 5, 2025. https://doi.org/10.2139/ssrn.5708362.
  • Kanepajs, Arturs, Aditi Basu, Sankalpa Ghose, Constance Li, Akshat Mehta, Ronak Mehta, Samuel David Tucker-Davis, Bob Fischer, and Jacy Reese Anthis. "What do Large Language Models Say About Animals? Investigating Risks of Animal Harm in Generated Text." In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency, pp. 1387-1410. 2025. https://doi.org/10.1145/3715275.3732094.
  • Kangasmetsä, Pessi. Antropomorfisten järjestelmien tuotekehitys: Case Starship -kuljetusrobotit. Master’s thesis, University of Jyväskylä, Faculty of Information Technology, Master’s Degree Programme in Information Systems, 2025. https://urn.fi/URN:NBN:fi:jyu-202506165397.
  • Kaur, Dilly. "The Iatrogenic Algorithm: A Comprehensive Analysis of Systemic Failures in Artificial Intelligence for Mental Healthcare." Available at SSRN 6043056 (2025). https://dx.doi.org/10.2139/ssrn.6043056.
  • Keat, David Lau, Ganthan Narayana Samy, Fiza Abdul Rahim, Mahiswaran Selvanathan, Nurazean Maarop, Mugilraj Radha Krishnan, and Sundresan Perumal. "Responsible procurement of AI applications: a risk-based framework for Malaysian government agencies." Mathematical Sciences and Informatics Journal (MIJ) 6, no. 2 (2025): 114-131. https://ir.uitm.edu.my/id/eprint/128935/.
  • Kennedy, Ryan, Lydia Tiede, Amanda Austin, and Kenzy Ismael. "Law Enforcement and Legal Professionals’ Trust in Algorithms." Journal of Law & Empirical Analysis 2, no. 1 (2025): 77-96. https://doi.org/10.1177/2755323X251325594.
  • Khanday, Manzoor A., Nikita Negi, and Trushal Hirani. "AI in Financial Risk Management: Transformative Potential, Ethical Challenges, and Emerging Threats." In Artificial Intelligence for Financial Risk Management and Analysis, pp. 335-354. IGI Global Scientific Publishing, 2025. https://www.igi-global.com/chapter/ai-in-financial-risk-management/374373.
  • Kim, Jongkook. “The Paradox of AI Disrupting Law while Evading Legal Accountability: A Call for Strict Developer Liability in the Age of Digital Truth Destruction.” SSRN, September 28, 2025. https://doi.org/10.2139/ssrn.5540698.
  • Knight, Simon, Cormac McGrath, Olga Viberg, and Teresa Cerratto Pargman. "Learning about AI ethics from cases: a scoping review of AI incident repositories and cases." AI and Ethics 5, no. 3 (2025): 2037-2053. https://doi.org/10.1007/s43681-024-00639-8.
  • Kong, Yeqing. "From incident to insight: Understanding AI model lifecycle management through case analysis." Programmatic Perspectives 1, no. 1 (2025). https://programmaticperspectives.cptsc.org/index.php/jpp/article/view/118.
  • Kovačević, Ana. "AI Incidents and Data Integrity." In Artificial Intelligence Conference Book of Abstracts, Belgrade, October 9-10, 2025, pp. 33-34. 2025. https://link.springer.com/content/pdf/10.1007/978-981-97-8440-0_89-1.pdf.
  • Kuilman, Sietze Kai, Luciano Cavalcante Siebert, Stefan Buijsman, and Catholijn M. Jonker. "How to gain control and influence algorithms: contesting AI to find relevant reasons." AI and Ethics 5, no. 2 (2025): 1571-1581. https://doi.org/10.1007/s43681-024-00500-y.
  • Lang, Isabel. Künstliche Intelligenz und politischer Extremismus: Ein Überblick [Artificial Intelligence and Political Extremism: An Overview]. essentials. Wiesbaden: Springer VS, 2025. https://doi.org/10.1007/978-3-658-49024-9.
  • Laufer, Benjamin, Jon Kleinberg, and Hoda Heidari. "The Backfiring Effect of Weak AI Safety Regulation." arXiv preprint arXiv:2503.20848 (2025). https://doi.org/10.48550/arXiv.2503.20848.
  • Le Jeune, Pierre, Jiaen Liu, Luca Rossi, and Matteo Dora. "RealHarm: A Collection of Real-World Language Model Application Failures." In Proceedings of the First Workshop on LLM Security (LLMSEC), pp. 87-100. 2025. https://aclanthology.org/2025.llmsec-1.7/.
  • Ledford, Theodore Dreyfus. "Does artificial intelligence harm labour? Investigating the limitations of incident trackers as evidence for policymaking." Information Research: An International Electronic Journal 30, no. iConf (2025): 486-499. https://doi.org/10.47989/ir30iConf47296.
  • Li, Chenxi, Yixun Lin, Xinyi Tu, Jing Chen, and Ziqi Zhao. "Synthesizing AI Failure Research: A Scoping Review." Business & Information Systems Engineering (2025): 1-20. https://doi.org/10.1007/s12599-025-00970-2.
  • Lior, Anat. "E/Insuring the AI Age: Empirical Insights into Artificial Intelligence Liability Policies." Connecticut Insurance Law Journal 31 (forthcoming, 2025). https://dx.doi.org/10.2139/ssrn.5316376.
  • Lu, You, Dingji Wang, Kaifeng Huang, Bihuan Chen, and Xin Peng. "TigAug: Data Augmentation for Testing Traffic Light Detection in Autonomous Driving Systems." arXiv preprint arXiv:2507.05932 (2025). https://doi.org/10.48550/arXiv.2507.05932.
  • Mao, Yanbing, Yihao Cai, and Lui Sha. “Real-DRL: Teach and Learn at Runtime.” NeurIPS 2025 (poster). OpenReview. Published September 18, 2025; last modified December 10, 2025. https://openreview.net/forum?id=gXZlZAeqay.
  • Mao, Yanbing, Yihao Cai, and Lui Sha. "Real-DRL: Teach and Learn in Reality." arXiv preprint arXiv:2511.00112 (2025). https://doi.org/10.48550/arXiv.2511.00112.
  • Marotta, Angelica, and Stuart Madnick. "The UN cybercrime treaty and AI: Navigating the intersection of technology and global policy." Issues in Information Systems 26, no. 4 (2025). https://doi.org/10.48009/4_iis_2025_101.
  • Martín Faba, José María. "La Inteligencia Artificial en la nueva Directiva de responsabilidad por los daños causados por productos defectuosos: ¿realidad o expectativa?" [Artificial Intelligence in the New Directive on Liability for Damage Caused by Defective Products: Reality or Expectation?] (2025). https://doi.org/10.18239/RCDC_2025.53.3648.
  • McGrath, Quintin P., Alan R. Hevner, and Gert-Jan de Vreede. "Designing an enhanced enterprise risk management system to mitigate ethical risks of artificial intelligence applications." IEEE Transactions on Engineering Management (2025). https://doi.org/10.1109/TEM.2025.3565221.
  • McGregor, Sean, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, William H. Smith, Shayne Longpre, Avijit Ghosh, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri. “To Err Is AI: A Case Study Informing LLM Flaw Reporting Practices.” Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 28 (April 11, 2025): 28938–28945. https://doi.org/10.1609/aaai.v39i28.35162.
  • Minchenko, E. A. “Analysis of Statistics of Incidents Involving Artificial Intelligence in the AI Incident Database.” Vestnik nauki 4, no. 6 (87) (2025): 1224–1230. (Минченко, Е. А. «Анализ статистики инцидентов с участием искусственного интеллекта базы данных AI Incident Database». Вестник науки 4, № 6 (87) (2025): 1224–1230.) https://cyberleninka.ru/article/n/analiz-statistiki-intsidentov-s-uchastiem-iskusstvennogo-intellekta-bazy-dannyh-ai-incident-database.
  • Mondragón, José J. Piña, and Angelina Espejel-Trujillo. "Towards a Harmonized Regulation of Artificial Intelligence: An Analysis of Its Main Implications and Risks." Cureus Journals 2, no. 1 (2025). https://doi.org/10.7759/s44389-025-07546-x.
  • Monga, Kapila. “Securing AI/ML Systems.” In AI/ML for Healthcare: Navigating the AI/ML Maze Responsibly, Securely, and Sustainably, 133–161. New York: Chapman and Hall/CRC, 2025. https://doi.org/10.1201/9781003454663-5.
  • Morales, Sergio, Robert Clarisó, and Jordi Cabot. "ImageBiTe: A Framework for Evaluating Representational Harms in Text-to-Image Models." In 2025 IEEE/ACM 4th International Conference on AI Engineering–Software Engineering for AI (CAIN), pp. 95-106. IEEE, 2025. https://doi.org/10.1109/CAIN66642.2025.00019.
  • Moreno, Michael, and Susan Ariel Aaronson. "Do AI Chatbot Firms Practice What They Preach?" In Proceedings of the AAAI Symposium Series, vol. 7, no. 1, pp. 54-62. 2025. https://doi.org/10.1609/aaaiss.v7i1.36867.
  • Muhammad, Aoun E., Kin Choong Yow, Jamel Baili, Yongwon Cho, and Yunyoung Nam. "CORTEX: Composite Overlay for Risk Tiering and Exposure in Operational AI Systems." arXiv preprint arXiv:2508.19281 (2025). https://doi.org/10.48550/arXiv.2508.19281.
  • Mushkani, Rashid. "Measuring What Matters: The AI Pluralism Index." arXiv preprint arXiv:2510.08193 (2025). https://doi.org/10.48550/arXiv.2510.08193.
  • Mushkani, Rashid. "Right-to-Override for Critical Urban Control Systems: A Deliberative Audit Method for Buildings, Power, and Transport." arXiv preprint arXiv:2509.13369 (2025). https://doi.org/10.48550/arXiv.2509.13369.
  • Namiot, Dmitry. "Artificial Intelligence in Cybersecurity. Chronicle. Issue 3." International Journal of Open Information Technologies 13, no. 11 (2025): 169-179. http://www.injoit.org/index.php/j1/article/view/2330.
  • Nerantzi, Elina, and Giovanni Sartor. "Crimes without criminals: in search of criminal liability for harms caused by AI systems." In Research Handbook on the Law of Artificial Intelligence, pp. 329-348. Edward Elgar Publishing, 2025. https://doi.org/10.4337/9781035316496.00024.
  • Nikolich, Anita. "AI Red Teaming for Elections." In PROMISE–PROMoting AI’s Safe usage for Elections, pp. 13-27. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-89853-2_2.
  • Nikolova-Minkova, Ventsislava. “Historical Development of Technologies in the Field of Artificial Intelligence.” Environment. Technology. Resources. Proceedings of the International Scientific and Practical Conference 2 (June 2025): 243–50. https://doi.org/10.17770/etr2025vol2.8609.
  • Ochasi, Aloysius, Abdoul Jalil Djiberou Mahamadou, Russ B. Altman, and Levi UC Nkwocha. "Reframing Justice in Healthcare AI: An Ubuntu‐Based Approach for Africa." Developing World Bioethics (2025). https://doi.org/10.1111/dewb.70007.
  • Paeth, Kevin, and Sean McGregor. "AI Risk, Safety, and Incident Reporting." In Handbook of Human-Centered Artificial Intelligence, pp. 1-39. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-97-8440-0_89-1.
  • Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. “Lessons for editors of AI incidents from the AI incident database.” In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 28, pp. 28946-28953. 2025. https://doi.org/10.1609/aaai.v39i28.35163.
  • Patni, Jatin. “Whose Fairness? Challenges in Building a Global Framework for AI Fairness.” In Shaping U.S.-India A.I. Cooperation: Insights from the Inaugural U.S.-India A.I. Fellowship Program, edited by Andreas Kuehn and Anulekha Nandi, 203–223. New Delhi: Observer Research Foundation, September 2025. https://static1.squarespace.com/static/5ca0ec9b809d8e4c67c27b3a/t/68d19607be24d35b2290257a/1758565895537/Whose+Fairness%3F+Challenges+in+Building+a+Global+Framework+for+AI+Fairness.pdf.
  • Patti, Vittoria. Delegated Destruction: Artificial Intelligence in Warfare, the Accountability Gap and the Crisis of Global Governance. Thesis, Utrecht University, 2025. https://studenttheses.uu.nl/handle/20.500.12932/50039.
  • Peretti, Sara, Federica Caruso, Giacomo Valente, Luigi Pomante, and Tania Di Mascio. "Educating artificial intelligence following the child learning development trajectories." Behaviour & Information Technology (2025): 1-17. https://doi.org/10.1080/0144929X.2025.2455390.
  • Pi, Yulu, and Maddie Proctor. "Toward empowering AI governance with redress mechanisms." In Cambridge Forum on AI: Law and Governance, vol. 1, p. e24. Cambridge University Press, 2025. https://doi.org/10.1017/cfl.2025.9.
  • Pietri, Marcello, Marco Mamei, and Michele Colajanni. "Telecom spam and scams in the 5G and artificial intelligence era: analyzing economic implications, technical challenges and global regulatory efforts." International Journal of Information Security 24, no. 3 (2025): 139. https://doi.org/10.1007/s10207-025-01062-8.
  • Piorkowski, David, Michael Hind, John Richards, and Jacquelyn Martino. "Developing a Risk Identification Framework for Foundation Model Uses." arXiv preprint arXiv:2506.02066 (2025). https://doi.org/10.48550/arXiv.2506.02066.
  • Płachecka, Magdalena. “Chatboty jako współczesna forma narzędzi działalności przestępczej” [Chatbots as a Contemporary Form of Criminal Activity Tools]. Przegląd Policyjny 159, no. 3 (2025): 89–109. https://doi.org/10.5604/01.3001.0055.3390.
  • Raghupathi, Wullianallur, Aditya Saharia, and Tanush Kulkarni. "Identifying Key Issues in Artificial Intelligence Litigation: A Machine Learning Text Analytic Approach." Applied Sciences 16, no. 1 (2025): 235. https://doi.org/10.3390/app16010235.
  • Raman, Vyoma, Judy Hanwen Shen, Andy K. Zhang, Lindsey Gailmard, Rishi Bommasani, Daniel E. Ho, and Angelina Wang. "Disclosure and Evaluation as Fairness Interventions for General-Purpose AI." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 3, pp. 2121-2135. 2025. https://doi.org/10.1609/aies.v8i3.36700.
  • Rao, Pooja SB, Laxminarayen Nagarajan Venkatesan, Mauro Cherubini, and Dinesh Babu Jayagopi. "Invisible Filters: Cultural Bias in Hiring Evaluations Using Large Language Models." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 3, pp. 2164-2176. 2025. https://doi.org/10.1609/aies.v8i3.36703.
  • Rao, Pooja SB, Sanja Šćepanović, Dinesh Babu Jayagopi, Mauro Cherubini, and Daniele Quercia. "The AI Model Risk Catalog: What Developers and Researchers Miss About Real-World AI Harms." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 3, pp. 2150-2163. 2025. https://doi.org/10.1609/aies.v8i3.36702.
  • Rao, Pooja SB, Sanja Šćepanović, Ke Zhou, Edyta Paulina Bogucka, and Daniele Quercia. "RiskRAG: A Data-Driven Solution for Improved AI Model Risk Reporting." In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1-26. 2025. https://doi.org/10.1145/3706598.3713979.
  • Raza, Shaina, Rizwan Qureshi, Anam Zahid, Safiullah Kamawal, Ferhat Sadak, Joseph Fioresi, Muhammaed Saeed, Ranjan Sapkota, Aditya Jain, Anas Zafar, Muneeb Ul Hassan, Aizan Zafar, Hasan Maqbool, Ashmal Vayani, Jia Wu, and Maged Shoman. “Who is Responsible? The Data, Models, Users or Regulations? A Comprehensive Survey on Responsible Generative AI for a Sustainable Future.” arXiv, last revised September 24, 2025. https://doi.org/10.48550/arXiv.2502.08650.
  • Reyero Lobo, Paula. Addressing Bias in Hate Speech Detection: Enhancing Target Group Identification with Semantics. PhD diss., The Open University, 2025. https://doi.org/10.21954/ou.ro.00102419.
  • Richards, Isabel, Claire Benn, and Miri Zilka. "From Incidents to Insights: Patterns of Responsibility Following AI Harms." In Proceedings of the 5th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 151-169. 2025. https://doi.org/10.1145/3757887.3763018.
  • Roodsari, Mahboobe Sadeghipour, Vincent Meyers, and Mehdi Tahoori. "Lightweight Concurrent Out-of-Distribution Detection in Hyperdimensional Computing Hardware." In 2025 IEEE 31st International Symposium on On-Line Testing and Robust System Design (IOLTS), pp. 1-7. IEEE, 2025. https://doi.org/10.1109/IOLTS65288.2025.11116825.
  • Roslander, Eero. “Osakeyhtiön hallituksen päätöksenteko ja tekoäly: Agenttiongelma ja fidusiaariset velvollisuudet kehittyvässä päätöksentekoprosessissa” [Limited-liability company board decision-making and artificial intelligence: The agency problem and fiduciary duties in evolving decision-making processes]. Master’s thesis (LL.M.), University of Helsinki, Faculty of Law, 2025. https://helda.helsinki.fi/server/api/core/bitstreams/41e56027-aba3-4617-a5e3-28f3af9f0d0a/content.
  • Roundtree, Aimee Kendall. "Facial Recognition AI Incident Reporting: An NLP Analysis." In International Conference on Human-Computer Interaction, pp. 214-225. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-032-13187-4_15.
  • Russo, Diego, Gian Marco Orlando, Valerio La Gatta, and Vincenzo Moscato. “Automating AI Failure Tracking: Semantic Association of Reports in AI Incident Database.” arXiv preprint arXiv:2507.23669 (2025). https://doi.org/10.48550/arXiv.2507.23669.
  • Sánchez, Gustavo, Ghada Elbez, and Veit Hagenmeyer. "A Global Analysis of Cyber Threats to the Energy Sector: 'Currents of Conflict' from a Geopolitical Perspective." arXiv preprint arXiv:2509.22280 (2025). https://doi.org/10.48550/arXiv.2509.22280.
  • Sandhaus, Hauke, Angel Hsing-Chi Hwang, Wendy Ju, and Qian Yang. "My Precious Crash Data: Barriers and Opportunities in Encouraging Autonomous Driving Companies to Share Safety-Critical Data." Proceedings of the ACM on Human-Computer Interaction 9, no. 7 (2025): 1-21. https://doi.org/10.1145/3757493.
  • Schrijer, Bilge Bingöl. “Yapay Zekâ ile Desteklenen Eğitim ve Yetenek Keşfi Programları: Çocuk Haklarının Korunmasında Boşluklar” [AI-Supported Education and Talent Discovery Programs: Gaps in the Protection of Children’s Rights]. In Türk Hukukunda Çocuk Sempozyumu: 28–29–30 Nisan 2025 (Hukuk Haftası), edited by İhsan Erdoğan, 96–100. Başkent Üniversitesi Geliştirme Vakfı İktisadi İşletmesi, October 2025. https://hukuk.baskent.edu.tr/kw/upload/246/dosyalar/T%C3%BCrk%20Hukukunda%20%C3%87ocuk%20Sempozyumu%281%29.pdf.
  • Selvadurai, Niloufer. "Seeing the full picture: an integrated regulatory model for AI." International Journal of Law and Information Technology 33 (2025): eaaf005. https://doi.org/10.1093/ijlit/eaaf005.
  • Sewell, Destynie, and Janine S. Hiller. “Presumptions of AI Malfunction.” SSRN, last revised March 25, 2025. https://doi.org/10.2139/ssrn.5143853.
  • Shafqat, Zunaira, Atif Aftab Jilani, Nigar Azhar Butt, Shafiq Ur Rehman, Muhammad Usman Khalid, and Volker Gruhn. "FairPrompt: Efficient Multi-Objective Prompt Optimization for Fairness Testing in Conversational AI Systems." IEEE Access (2025). https://doi.org/10.1109/ACCESS.2025.3618868.
  • Shams, Rifat Ara, Didar Zowghi, and Muneera Bano. "AI for All: Identifying AI incidents Related to Diversity and Inclusion." Journal of Artificial Intelligence Research 83 (2025). https://doi.org/10.1613/jair.1.17806.
  • Sharevski, Filipo, Jennifer Vander Loop, Bill Evans, and Alexander Ponticello. "‘You Creep! It Really Worked!’: An Empirical Study of Telephone Scams with Cloned Familiar Voices and Trusted Caller IDs." In Proceedings of the 2025 New Security Paradigms Workshop, pp. 92-107. 2025. https://doi.org/10.1145/3774761.3774918.
  • Shen, Louisa. "Not the machine’s fault: taxonomising AI failure as computational (mis)use." AI & Society 40, no. 8 (2025): 5793-5807. https://doi.org/10.1007/s00146-025-02333-7.
  • Shi, Yike, Qing Xiao, Qing Hu, Hong Shen, and Hua Shen. "The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models." arXiv preprint arXiv:2509.10830 (2025). https://doi.org/10.1145/3772318.3791149.
  • Shiri Harzevili, Nima. "Improving the Reliability of AI Infrastructure Software with Data-Driven Software Analytics." PhD diss., York University, 2025. https://yorkspace.library.yorku.ca/items/8883c6e1-451a-480b-b35f-a860a3362eae.
  • Shumate, J. Nicholas, Eden Rozenblit, Matthew Flathers, Carlos A. Larrauri, Christine Hau, Winna Xia, E. Nicholas Torous, and John Torous. "Governing AI in mental health: 50-state legislative review." JMIR Mental Health 12 (2025): e80739. https://doi.org/10.2196/80739.
  • Silva, Jhessica, Diego AB Moreira, Gabriel O. dos Santos, Alef Ferreira, Helena Maia, Sandra Avila, and Helio Pedrini. "Evaluation of AI Ethics Tools in Language Models: A Developers' Perspective Case Study." arXiv preprint arXiv:2512.15791 (2025). https://doi.org/10.48550/arXiv.2512.15791.
  • Şimşek, Can, and Ayse Gizem Yasar. "From Rejection to Regulation: Mapping the Landscape of AI Resistance." Available at SSRN 5287068 (2025). https://dx.doi.org/10.2139/ssrn.5287068.
  • Simson, Jan, Fiona Draxler, Samuel Mehr, and Christoph Kern. "Preventing Harmful Data Practices by using Participatory Input to Navigate the Machine Learning Multiverse." In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1-30. 2025. https://doi.org/10.1145/3706598.3713482.
  • Singh, Vijaypal Rathor. "Navigating the complexities: AI, security, and liberty at the border." In Artificial Intelligence for Cyber Security and Industry 4.0, pp. 250-272. CRC Press. https://www.taylorfrancis.com/chapters/edit/10.1201/9781032657264-11/navigating-complexities-vijaypal-rathor-singh.
  • Sioumalas-Christodoulou, Konstantinos, and Aristotle Tympas. "AI metrics and policymaking: assumptions and challenges in the shaping of AI." AI & SOCIETY 40, no. 6 (2025): 4655-4670. https://doi.org/10.1007/s00146-025-02181-5.
  • Slater, David. Monitoring Intelligent Systems for Surprises: A FRAM-Based Framework for Management, Learning, and Resilient Anticipation in Complex AI Systems. FRAMSynt, October 10, 2024. Revised August 5, 2025. https://d1wqtxts1xzle7.cloudfront.net/124135386/No_surprises_AI_ds_1-libre.pdf.
  • Souza, Patrícia Campos Guimarães de. O novo rosto da discriminação: racismo algorítmico e a invisibilidade do povo negro na inteligência artificial [The New Face of Discrimination: Algorithmic Racism and the Invisibility of Black People in Artificial Intelligence]. Dissertação [Master’s thesis], Universidade Católica de Brasília, Brasília, November 28, 2025. https://bdtd.ucb.br:8443/jspui/handle/tede/3784.
  • Sychov, Sergey V., and Ursula Podosenin. "Ensuring Accountability in AI Decision-Making." Ethics in the Age of AI: Navigating Politics and Security (2025): 291-315. https://www.google.com/books/edition/Ethics_in_the_Age_of_AI/SzZfEQAAQBAJ.
  • Tansri, Farrah Faustine, Nadine Monem, and Lee Weinberg. "Authenticity in Biased Diversity: Investigating the Language of Prompt Performances in AI Image Generators." Journal of Aesthetics, Creativity and Art Management 4, no. 1 (2025): 75-101. https://jurnal2.isi-dps.ac.id/index.php/jacam/article/view/5414.
  • Tudoroiu, Theodor. "The Global Civilization Initiative and the Global Artificial Intelligence Governance Initiative." In China’s New Global Initiatives, pp. 187-253. Singapore: Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-8422-9_5.
  • Uddin, Rehman, Sana Gul, Muhammad Huzaifa Bin Salih, and Noor ul Huda. “Artificial Intelligence and the Fight Against Misinformation: Analyzing the Role of Digital Media in Shaping Public Perceptions in Pakistan.” SSRN, August 18, 2025. https://doi.org/10.2139/ssrn.5412557.
  • van Kolfschooten, Hannah. "Towards an EU Charter of Digital Patients’ Rights in the Age of Artificial Intelligence." Digital Society 4, no. 1 (2025): 6. https://doi.org/10.1007/s44206-025-00159-w.
  • Venkatasubramanian, Krishna, Haven Hardie, and Tina-Marie Ranalli. "Toward a taxonomy of negative outcomes from the use of AI-driven systems for people with disabilities." In Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 1-18. 2025. https://doi.org/10.1145/3663547.3746359.
  • von Wendt, Karl-Ludwig. “Wie wir die Kontrolle über unsere Zukunft behalten” [“How We Keep Control Over Our Future”]. In Zukunftsgestalter Deutschlands: Pioniergeschichten aus Wirtschaft, Wissenschaft, Politik und Zivilgesellschaft, 317–327. Berlin and Heidelberg: Springer Berlin Heidelberg, 2025. https://doi.org/10.1007/978-3-662-70324-3_18.
  • Wang, Dingji, You Lu, Bihuan Chen, Shuo Hao, Haowen Jiang, Yifan Tian, and Xin Peng. "Argus: Resilience-Oriented Safety Assurance Framework for End-to-End ADSs." arXiv preprint arXiv:2511.09032 (2025). https://doi.org/10.48550/arXiv.2511.09032.
  • Weebadu Arachchige, Piyumi Malkisara Weeraarachchi. "A framework for cyber threat intelligence sharing focused on AI vulnerabilities." Master's thesis, University of Oulu, 2025. https://urn.fi/URN:NBN:fi:oulu-202506194835.
  • Wilf-Townsend, Daniel. “Artificial Intelligence and Aggregate Litigation.” SSRN, March 1, 2025. https://doi.org/10.2139/ssrn.5163640.
  • Wodi, Alex. “AI Governance: A Necessity and Imperative.” SSRN Scholarly Paper, September 14, 2025. https://doi.org/10.2139/ssrn.5486407.
  • Wu, Linwan, Ertan Ağaoğlu, and Yuan Sun. "Artificial intelligence in advertising: unveiling challenges and opportunities via the consumer's lens." Handbook of Innovations in Strategic Communication: AI, Futurism and Directions (2025): 309. https://www.google.com/books/edition/Handbook_of_Innovations_in_Strategic_Com/M7yREQAAQBAJ.
  • Wulff, Peter, Marcus Kubsch, and Christina Krist. “Basics of Machine Learning.” In Applying Machine Learning in Science Education Research, edited by Peter Wulff, Marcus Kubsch, and Christina Krist, 15–48. Cham: Springer, 2025. https://doi.org/10.1007/978-3-031-74227-9_2.
  • Xu, Wei. "A User Experience 3.0 (UX 3.0) Paradigm Framework: Designing for Human-Centered AI Experiences." arXiv preprint arXiv:2506.23116 (2025). https://doi.org/10.48550/arXiv.2506.23116.
  • Xu, Wei. "Human-Centered Human-AI Interaction (HC-HAII): A Human-Centered AI Perspective." arXiv preprint arXiv:2508.03969 (2025). https://doi.org/10.48550/arXiv.2508.03969.
  • Yeung, Karen. "Can risks to fundamental rights arising from AI systems be 'managed' alongside health and safety risks? Implementing Article 9 of the EU AI Act." SSRN preprint (2025). https://dx.doi.org/10.2139/ssrn.5560783.
  • Yu, Yaman, Yiren Liu, Jacky Zhang, Yun Huang, and Yang Wang. "Understanding Generative AI Risks for Youth: A Taxonomy Based on Empirical Data." arXiv preprint arXiv:2502.16383 (2025). https://doi.org/10.48550/arXiv.2502.16383.
  • Yu, Yaman, Yiren Liu, Jacky Zhang, Yun Huang, and Yang Wang. “Youth-Centered GAI Risks (YAIR): A Taxonomy of Generative AI Risks from Empirical Data.” In Proceedings of the Twenty-First Symposium on Usable Privacy and Security (SOUPS 2025), 149–165. Berkeley, CA: USENIX Association, 2025. https://www.usenix.org/system/files/soups2025-yu.pdf.
  • Zampini, Stefano. Knowledge-Informed Machine Learning for Industrial Application. PhD diss., Politecnico di Torino, 2025. https://tesidottorato.depositolegale.it/bitstream/20.500.14242/355170/1/conv_zampini_phd_thesis.pdf.
  • Zeng, Yi, Enmeng Lu, Xiaoyang Guo, Cunqing Huangfu, Jiawei Xie, Yu Chen, Zhengqi Wang et al. "AI Governance InternationaL Evaluation Index (AGILE Index) 2025." arXiv preprint arXiv:2507.11546 (2025). https://doi.org/10.48550/arXiv.2507.11546.
  • Zhang, Qingjie, Di Wang, Haoting Qian, Liu Yan, Tianwei Zhang, Ke Xu, Qi Li, Minlie Huang, Hewu Li, and Han Qiu. "Speculating LLMs’ Chinese Training Data Pollution from Their Tokens." In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pp. 26124-26144. 2025. https://doi.org/10.18653/v1/2025.emnlp-main.1327.
  • Zhang, Yazhuo, Jinqing Cai, Avani Wildani, and Ana Klimovic. "Rethinking Web Cache Design for the AI Era." In Proceedings of the 2025 ACM Symposium on Cloud Computing, pp. 535-542. 2025. https://doi.org/10.1145/3772052.3772255.
  • Zhao, Yijun, Zhengke Li, Yicheng Wang, Xueyan Cai, Xiaojing Zhou, Yifan Yan, Kecheng Jin et al. "DreamDirector: Designing a Generative AI System to Aid Therapists in Treating Clients' Nightmares." In Proceedings of the 30th International Conference on Intelligent User Interfaces, pp. 553-578. 2025. https://doi.org/10.1145/3708359.3712078.
  • Zhao, Yiling, Audrey Michal, Nithum Thain, and Hari Subramonyam. "Thinking Like a Scientist: Can Interactive Simulations Foster Critical AI Literacy?" In International Conference on Artificial Intelligence in Education, pp. 60-74. Cham: Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-98417-4_5.
  • Zheng, Simin, Jared M. Clark, Fatemeh Salboukh, Priscila Silva, Karen da Mata, Fenglian Pan, Jie Min et al. "Bridging the Data Gap in AI Reliability Research and Establishing DR-AIR, a Comprehensive Data Repository for AI Reliability." arXiv preprint arXiv:2502.12386 (2025). https://doi.org/10.48550/arXiv.2502.12386.
  • Zheng, Simin, Jared M. Clark, Fatemeh Salboukh, Priscila Silva, Karen da Mata, Fenglian Pan, Jie Min et al. "DR-AIR: A data repository bridging the research gap in AI reliability." Quality Engineering (2025): 1-22. https://doi.org/10.1080/08982112.2025.2539834.
  • Zorkóczy, Miklós. "AI és Legaltech a jogászi munkában" [AI and Legal Tech in Legal Work]. (2025). https://publikacio.ppke.hu/id/eprint/3344/1/Zorkoczy_AI_es_Legaltech_a_jogaszi_munkaban.pdf.

2024

  • Abercrombie, Gavin, Djalel Benbouzid, Paolo Giudici, Delaram Golpayegani, Julio Hernandez, Pierre Noro, Harshvardhan Pandit, Eva Paraschou, Charlie Pownall, Jyoti Prajapati, Mark A. Sayre, Ushnish Sengupta, Arthit Suriyawongkul, Ruby Thelot, Sofia Vei, and Laura Waltersdorfer. "A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms." arXiv, last revised November 9, 2024.
  • Abu Zaid, Faried, Daniel Neider, and Mustafa Yalçıner. "VeriFlow: Modeling Distributions for Neural Network Verification." arXiv, June 20, 2024.
  • Agarwal, Avinash, and Manisha Nene. “Advancing Trustworthy AI for Sustainable Development: Recommendations for Standardising AI Incident Reporting.” In ITU Kaleidoscope 2024 (New Delhi, India), 1–8. 2024. https://doi.org/10.23919/ITUK62727.2024.10772925.
  • Agarwal, Avinash, and Manisha J. Nene. "Addressing AI Risks in Critical Infrastructure: Formalising the AI Incident Reporting Process." Paper presented at the 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, July 12-14, 2024. Published September 20, 2024.
  • Agarwal, Avinash, and Manisha J. Nene. "Standardised Schema and Taxonomy for AI Incident Databases in Critical Digital Infrastructure." Paper presented at the 2024 IEEE Pune Section International Conference (PuneCon), Pune, India, December 13-15, 2024. Published February 27, 2025.
  • Allaham, Mowafak, and Nicholas Diakopoulos. "Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment." arXiv, May 20, 2024.
  • All Party Parliamentary Group for Fair Elections. Free But Not Fair: British Elections and How to Restore Trust in Politics. November 25, 2024.
  • Anandayuvaraj, Dharun, Matthew Campbell, Arav Tewari, and James C. Davis. "FAIL: Analyzing Software Failures from the News Using LLMs." In Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (ASE '24), 506-518. New York: Association for Computing Machinery, 2024.
  • Bach, Tita A., Jenny K. Kristiansen, Aleksandar Babic, and Alon Jacovi. "Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review." IEEE Access 12 (August 1, 2024): 106385-106414.
  • Baeza-Yates, Ricardo, and Usama M. Fayyad. "Responsible AI: An Urgent Mandate." IEEE Intelligent Systems 39, no. 1 (January-February 2024): 12-17.
  • Batool, Amna, Didar Zowghi, and Muneera Bano. "AI Governance: A Systematic Literature Review." Research Square, July 24, 2024.
  • Bender, Emily M., and Alvin Grissom II. "Power Shift: Toward Inclusive Natural Language Processing." In Inclusion in Linguistics, edited by Anne H. Charity Hudley, Christine Mallinson, and Mary Bucholtz, 199-221. Oxford: Oxford University Press, 2024.
  • Bérastégui, Pierre. Artificial Intelligence in Industry 4.0: Implications for Occupational Safety and Health. Report 2024.01. Brussels: European Trade Union Institute (ETUI), January 2024.
  • Biecek, Przemyslaw, and Wojciech Samek. "Position: Explain to Question Not to Justify." arXiv, June 28, 2024.
  • Bieringer, Lukas, Kevin Paeth, Jochen Stängler, Andreas Wespi, Alexandre Alahi, and Kathrin Grosse. "Position: A Taxonomy for Reporting and Describing AI Security Incidents." arXiv preprint arXiv:2412.14855 [cs.CR]. First submitted December 19, 2024; revised February 26, 2025.
  • Bikkasani, Dileesh Chandra. "Navigating Artificial General Intelligence (AGI): Societal Implications, Ethical Considerations, and Governance Strategies." AI and Ethics, published December 17, 2024.
  • Birkstedt, Teemu. Governing Artificial Intelligence: From Ethical Principles Toward Organizational AI Governance Practices. Doctoral diss., University of Turku, Turku School of Economics, 2024. Annales Universitatis Turkuensis, Ser. E, Oeconomica, Tom. 124.
  • Blösser, Myrthe, and Andrea Weihrauch. "A Consumer Perspective of AI Certification: The Current Certification Landscape, Consumer Approval, and Directions for Future Research." European Journal of Marketing 58, no. 2 (February 8, 2024).
  • Bogucka, Edyta, Marios Constantinides, Julia De Miguel Velazquez, Sanja Šćepanović, Daniele Quercia, and Andrés Gvirtz. "The Atlas of AI Incidents in Mobile Computing: Visualizing the Risks and Benefits of AI Gone Mobile." arXiv, July 22, 2024.
  • Bogucka, Edyta, Marios Constantinides, Sanja Šćepanović, and Daniele Quercia. "AI Design: A Responsible AI Framework for Impact Assessment Reports." IEEE Internet Computing (Early Access), September 2, 2024.
  • Bogucka, Edyta, Sanja Šćepanović, and Daniele Quercia. "Atlas of AI Risks: Enhancing Public Understanding of AI Risks." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 12, no. 1 (October 14, 2024): 33-43.
  • Bolboli Qadikolaei, Somayeh, and Hamid Parsania. "The Concept of Human-Centricity in Sociological Studies of Artificial Intelligence." Quarterly of Social Studies and Research in Iran 13, no. 3 (September 2024): 425-449.
  • Bommasani, Rishi, Kevin Klyman, Shayne Longpre, Sayash Kapoor, Nestor Maslej, Betty Xiong, Daniel Zhang, and Percy Liang. "The 2023 Foundation Model Transparency Index." Transactions on Machine Learning Research, February 2025.
  • Brandt, Aniek. Evaluating the Epistemic Condition of Responsibility for AI. Master's thesis, Utrecht University, 2024.
  • Bylykbashi, Anxhela, and Lana Gavranović. Mitigating Non-Consumer AI Malfunctions: Response Strategies of Retail Organizations. Master's thesis, Jönköping International Business School, Jönköping University, 2024.
  • Byrd, Don. "A+AI: Threats to Society, Remedies, and Governance." arXiv, September 3, 2024.
  • Cao, Hongpeng, Yanbing Mao, Lui Sha, and Marco Caccamo. Physics-model-guided Worst-case Sampling for Safe Reinforcement Learning. arXiv preprint arXiv:2412.13224, submitted December 17, 2024.
  • Cao, Hongpeng, Yanbing Mao, Yihao Cai, Lui Sha, and Marco Caccamo. Runtime Learning Machine. Preprint submitted to International Conference on Learning Representations (ICLR 2025), September 17, 2024. Last modified February 5, 2025.
  • Cao, Hongpeng, Yanbing Mao, Yihao Cai, Lui Sha, and Marco Caccamo. "Simplex-Enabled Safe Continual Learning Machine." arXiv (preprint), last revised October 6, 2024.
  • Cattell, Sven, Avijit Ghosh, and Lucie-Aimée Kaffee. "Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities." arXiv, July 26, 2024.
  • Chakraborti, Mahasweta, Bert Joseph Prestoza, Nicholas Vincent, and Seth Frey. Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure. arXiv, September 27, 2024.
  • Chen, Chuan, Yu Feng, Mengyi Wei, Peng Luo, Shengkai Wang, and Liqiu Meng. "A Hyper-Knowledge Graph System for Research on AI Ethics Cases." Heliyon 10, no. 7 (April 15, 2024): e29048.
  • Chen, Hao, Bhiksha Raj, Xing Xie, and Jindong Wang. "On Catastrophic Inheritance of Large Foundation Models." arXiv, February 2, 2024.
  • Cheong, Ben Chester. "Transparency and Accountability in AI Systems: Safeguarding Well-Being in the Age of Algorithmic Decision-Making." Frontiers in Human Dynamics 6 (July 2, 2024).
  • Chmielinski, Kasia, Sarah Newman, Chris N. Kranzinger, Michael Hind, Jennifer Wortman Vaughan, Margaret Mitchell, Julia Stoyanovich, Angelina McMillan-Major, Emily McReynolds, Kathleen Esfahany, Mary L. Gray, Audrey Chang, and Maui Hudson. The CLeAR Documentation Framework for AI Transparency: Recommendations for Practitioners & Context for Policymakers. Harvard Kennedy School Shorenstein Center on Media, Politics and Public Policy, May 21, 2024.
  • Cho, Deun-Sol, Jae-Min Cho, and Won-Tae Kim. "A Generative Digital Twin for Continually Enhancing the Intended Functional Safety of Cyber--Physical Systems." IEEE Transactions on Reliability (Early Access), October 8, 2024.
  • Corrêa, Nicholas Kluge. "Dynamic Normativity: Necessary and Sufficient Conditions for Value Alignment." arXiv (preprint), June 18, 2024.
  • Cox, Andrew. "11 Ethics Case Studies of Artificial Intelligence for Library and Information Professionals." In New Horizons in Artificial Intelligence in Libraries, edited by Edmund Balnaves, Leda Bultrini, Andrew Cox, and Raymond Uzwyshyn, 156--168. IFLA Publications 185. Berlin: De Gruyter Saur, 2025.
  • Daniels, Owen J., and Dewey Murdick. Enabling Principles for AI Governance. Washington, DC: Center for Security and Emerging Technology, July 2024.
  • Daugherty, Paul, Jeremy Jurgens, John Granger, and Cathy Li. AI Governance Alliance: Briefing Paper Series. World Economic Forum, January 18, 2024.
  • David, Tom, and Nicolas Miailhe. "Assessing the Safety and Robustness of Advanced AI." Politique étrangère 243, no. 3 (2024): 51--65.
  • De Miguel Velázquez, Julia, Sanja Šćepanović, Andrés Gvirtz, and Daniele Quercia. "Decoding Real-World Artificial Intelligence Incidents." Computer 57, no. 11 (November 2024): 71--81.
  • DeVrio, Alicia, Motahhare Eslami, and Kenneth Holstein. "Building, Shifting, & Employing Power: A Taxonomy of Responses From Below to Algorithmic Harm." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 1093--1106. June 5, 2024.
  • Dixon, Ren Bin Lee, and Heather Frase. An Argument for Hybrid AI Incident Reporting: Lessons Learned from Other Incident Reporting Systems. Issue Brief. Center for Security and Emerging Technology, March 2024.
  • Drage, Eleanor, Kerry McInerney, and Rosi Braidotti, eds. The Good Robot: Why Technology Needs Feminism. London: Bloomsbury Academic, 2024.
  • Duarte, Daniel Edler. "Tecnopolíticas da Falha: Dispositivos de Crítica e Resistência a Novas Ferramentas Punitivas [Technopolitics of Failure: Modes of Critique and Resistance to New Punitive Tools]." Revista Brasileira de Ciências Sociais 39 (2024).
  • Dutu, Andrei. "Uniunea Europeană: Instituirea unui Regim Comun de Răspundere Extracontractuală (Delictuală) în Materie de Prejudiciu Cauzat de Inteligenţa Artificială [Propunere de directivă a Parlamentului European și a Consiliului privind adaptarea normelor în materie de răspundere civilă extracontractuală la inteligența artificială (Directiva privind răspunderea în materie de IA)] [The European Union: Establishing a Common Regime of Non-Contractual (Tort) Liability for Damage Caused by Artificial Intelligence]." Pandectele Române, no. 4 (April 2024): 211--218.
  • Expósito Jiménez, Víctor J., Georg Macher, Daniel Watzenig, and Eugen Brenner. "Safety of the Intended Functionality Validation for Automated Driving Systems by Using Perception Performance Insufficiencies Injection." Vehicles 6, no. 3 (July 4, 2024): 1164--1184.
  • Faulhaber, Ella, and Charles Chaffin. "Artificial Intelligence in Accounting, Medicine, and Law with Potential Implications for Financial Planning: A Review of Literature." Financial Services Review 32, no. 4 (2024): 1--11.
  • Gagnon, Paul, Misha Benjamin, Justine Gauthier, Catherine Regis, Jenny Lee, and Alexei Nordell-Markovits. "On the Modification and Revocation of Open Source Licences." arXiv, May 29, 2024.
  • Goldkind, Lauri, Joy Ming, and Alex Fink. "AI in the Nonprofit Human Services: Distinguishing Between Hype, Harm, and Hope." Human Service Organizations: Management, Leadership & Governance, published online December 3, 2024.
  • Golpayegani, Delaram. Semantic Frameworks to Support the EU AI Act's Risk Management and Documentation. PhD diss., Trinity College Dublin, University of Dublin, 2024.
  • González Mendoza, Juan Pablo, Felipe Trujillo-Romero, and Juan José Cárdenas Cornejo. Detección de objetos 3D con PointNet para la conducción autónoma [3D Object Detection with PointNet for Autonomous Driving]. Congreso Estudiantil de Inteligencia Artificial Aplicada a la Ingeniería y Tecnología, UNAM, FESC, Estado de México, 2024.
  • Gray, Douglas, and Evan Shellshear. Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, Without the Hype. Boca Raton, FL: CRC Press, 2024.
  • Greenberg, Ariel M. "A Schema for Harms-Sensitive Reasoning, and an Approach to Populate Its Ontology by Human Annotation." In Putting AI in the Critical Loop: Assured Trust and Autonomy in Human-Machine Teams, edited by Prithviraj Dasgupta, James Llinas, Tony Gillespie, Scott Fouse, William Lawless, Ranjeev Mittu, and Donald Sofge, 265--278. London: Academic Press, 2024.
  • Gross, Nicole. "A Powerful Potion for a Potent Problem: Transformative Justice for Generative AI in Healthcare." AI and Ethics (July 31, 2024).
  • Grosse, Kathrin, Lukas Bieringer, Tarek R. Besold, Battista Biggio, and Alexandre Alahi. "When Your AI Becomes a Target: AI Security Incidents and Best Practices." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 21 (March 24, 2024): 23041--23046.
  • Henman, Paul W. Fay. "Just AI: Using Socio-Legal Studies of Fairness to Inform Ethical AI in Government." In Socio-Legal Generation: Essays in Honour of Michael Adler, edited by Sharon Cowan and Simon Halliday, 37--54. Palgrave Socio-Legal Studies. Cham: Palgrave Macmillan, 2024.
  • Hollanek, Tomasz, and Indira Ganesh. "Easy Wins and Low Hanging Fruit: Blueprints, Toolkits, and Playbooks to Advance Diversity and Inclusion in AI." In In/Convenience: Inhabiting the Logistical Surround, edited by Joshua Neves and Marc Steinberg, 162--175. Amsterdam: Institute of Network Cultures, 2024.
  • Hossain, Mahmood, Hamad Khalid, Avent Prakasa Rao, Mohammad Lootah, Salah Salim Khalaf Al-Mohammedi, and Salih Rashid Majeed. "Comprehensive Review of AI, IoT, and ML in Enhancing Urban Mobility and Reducing Carbon Footprints." Paper presented at the 2024 Third International Conference on Sustainable Mobility Applications, Renewables and Technology (SMART), Dubai, United Arab Emirates, November 22--24, 2024. IEEE.
  • Householder, Allen, Vijay Sarvepalli, Jeff Havrilla, Matthew Churilla, Lena Pons, Shing-hon Lau, Nathan VanHoudnos, Andrew Kompanek, and Lauren McIlvenny. Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems. Pittsburgh, PA: Carnegie Mellon University, Software Engineering Institute, August 2024.
  • Howell, Bronwyn E. "WEIRD? Institutions and Consumers' Perceptions of Artificial Intelligence in 31 Countries." SSRN, July 23, 2024.
  • Hundt, Andrew, Julia Schuller, and Severin Kacianka. "Towards Equitable Agile Research and Development of AI and Robotics." arXiv, February 13, 2024.
  • Hussain, Muhammad, Ioanna Iacovides, Tom Lawton, Vishal Sharma, Zoe Porter, Alice Cunningham, Ibrahim Habli, Shireen Hickey, Yan Jia, Phillip Morgan, and Nee Ling Wong. "Development and Translation of Human-AI Interaction Models into Working Prototypes for Clinical Decision-Making." In Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS '24), 1607--1619. July 1, 2024.
  • Hutiri, Wiebke, Orestis Papakyriakopoulos, and Alice Xiang. "Not My Voice! A Taxonomy of Ethical and Safety Harms of Speech Generators." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 359--376. June 5, 2024.
  • Kiviharju, Mikko. "On the Cybersecurity of Logistics in the Age of Artificial Intelligence." In Artificial Intelligence for Security: Enhancing Protection in a Changing World, edited by Tuomo Sipola, Janne Alatalo, Monika Wolfmayr, and Tero Kokkonen, 189--219. Cham: Springer, 2024.
  • Klingbeil, Artur, Cassandra Grützner, and Philipp Schreck. "Trust and Reliance on AI---An Experimental Study on the Extent and Costs of Overreliance on AI." Computers in Human Behavior 160 (November 2024): 108352.
  • Kloza, Dariusz, Thibaut D'hulst, and Malik Aouadi. "What Could Possibly Go Wrong? On Risks to the Rights and Freedoms of Natural Persons in EU Data Protection Law, Their Typologies and Their Identification." Technology and Regulation (2024): 309--329.
  • Knight, Simon, Cormac McGrath, Olga Viberg, and Teresa Cerratto Pargman. "Learning about AI Ethics from Cases: A Scoping Review of AI Incident Repositories and Cases." Research Square, August 23, 2024.
  • Knoll, Alessandra, ed. Desafios do Direito Frente às Novas Tecnologias [Challenges of Law in the Face of New Technologies]. 1st ed. Vol. 1. Guarujá-SP: Editora Científica Digital, June 28, 2024.
  • Koh, Benjamin. "Seeking the Golden Thread in the Black Box: Artificial Intelligence and Personal Injury Law." Precedent (Sydney, N.S.W.), no. 183 (July 2024): 46--50. Sydney: Australian Lawyers Alliance.
  • Kowald, Dominik, Sebastian Scher, Viktoria Pammer-Schindler, Peter Müllner, Kerstin Waxnegger, Lea Demelius, Angela Fessl, Maximilian Toller, Inti Gabriel Mendoza Estrada, Ilija Šimić, Vedran Sabol, Andreas Trügler, Eduardo Veas, Roman Kern, Tomislav Nad, and Simone Kopeinik. "Establishing and Evaluating Trustworthy AI: Overview and Research Challenges." Frontiers in Big Data 7 (November 28, 2024).
  • Kox, Esther S., and Beatrice Beretta. "Evaluating Generative AI Incidents: An Exploratory Vignette Study on the Role of Trust, Attitude, and AI Literacy." In HHAI 2024: Hybrid Human AI Systems for the Social Good, 188--198. Vol. 386 of Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press, 2024.
  • Kuilman, Sietze Kai, Luciano Cavalcante Siebert, Stefan Buijsman, and Catholijn M. Jonker. "How to Gain Control and Influence Algorithms: Contesting AI to Find Relevant Reasons." AI and Ethics (June 5, 2024).
  • Laczi, Szandra Anna, and Valéria Póser. "Impact of Deepfake Technology on Children: Risks and Consequences." In Proceedings of the 2024 IEEE 22nd Jubilee International Symposium on Intelligent Systems and Informatics (SISY), Pula, Croatia, September 19--21, 2024. IEEE, 2024.
  • Lanamäki, Arto, Karin Väyrynen, Heidi Hietala, Elena Parmiggiani, and Polyxeni Vasilakopoulou. "Not Inevitable: Navigating Labor Displacement and Reinstatement in the Pursuit of AI for Social Good." Communications of the Association for Information Systems 55 (2024): 831--845.
  • Lawrence, Neil D., and Jessica Montgomery. "Accelerating AI for Science: Open Data Science for Science." Royal Society Open Science 11, no. 8 (August 21, 2024).
  • Lee, Hao-Ping (Hank), Yu-Ju Yang, Thomas Serban Von Davier, Jodi Forlizzi, and Sauvik Das. "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks." In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), Article 775, 1--19. May 11, 2024.
  • Lee, Sung Une, Harsha Perera, Boming Xia, Yue Liu, Qinghua Lu, and Liming Zhu. "QB4AIRA: A Question Bank for Responsible AI Risk Assessment." IEEE Software (Early Access), December 9, 2024.
  • Lee, Sung Une, Harsha Perera, Yue Liu, Boming Xia, Qinghua Lu, and Liming Zhu. "Responsible AI Question Bank: A Comprehensive Tool for AI Risk Assessment." arXiv, August 2, 2024.
  • Leibowicz, Claire R., and Christian H. Cardona. "From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases." arXiv, July 19, 2024.
  • Lu, You, Yifan Tian, Dingji Wang, Bihuan Chen, and Xin Peng. "AdvFuzz: Finding More Violations Caused by the EGO Vehicle in Simulation Testing by Adversarial NPC Vehicles." arXiv (preprint), November 29, 2024.
  • Lu, You, Yifan Tian, Yuyang Bi, Bihuan Chen, and Xin Peng. "DiaVio: LLM-Empowered Diagnosis of Safety Violations in ADS Simulation Testing." In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024), 376--388. New York: Association for Computing Machinery, 2024.
  • Maitra, Suvradip, Lyndal Sleep, Suzanna Fay, and Paul Henman. Building a Trauma-Informed Algorithmic Assessment Toolkit. ARC Centre of Excellence for Automated Decision-Making and Society, August 26, 2024.
  • Manheim, David. "Building a Culture of Safety for AI: Comparisons and Challenges." SSRN, July 10, 2024.
  • Mansyl, Vieri, and Windy Gambetta. "A Novel Approach to Explainable AI: Leveraging Ripple Down Rules Algorithm for Knowledge-Based Explanations." In Proceedings of the 2024 11th International Conference on Advanced Informatics: Concept, Theory and Application (ICAICTA), Singapore, September 28--30, 2024. IEEE, 2024.
  • Markovitch, Dmitri G., Rusty A. Stough, and Dongling Huang. "Consumer Reactions to Chatbot Versus Human Service: An Investigation in the Role of Outcome Valence and Perceived Empathy." Journal of Retailing and Consumer Services 79 (July 2024): 103847.
  • May, Richard, Jacob Krüger, and Thomas Leich. "SoK: How Artificial-Intelligence Incidents Can Jeopardize Safety and Security." In Proceedings of the 19th International Conference on Availability, Reliability and Security (ARES '24), Article 44, 1--12. July 30, 2024.
  • McGrath, Quintin. "Responding to the Sharp Rise in AI in the 2023 SIM IT Trends Survey." MIS Quarterly Executive 23, no. 1 (March 2024): Article 8.
  • McGrath, Quintin, Alan R. Hevner, and Gert-Jan de Vreede. "Managing Ethical Risks of Artificial Intelligence in Business Applications." TechRxiv, February 27, 2024.
  • McGregor, Sean. "Open Digital Safety." Computer 57, no. 4 (April 2, 2024): 99--103.
  • McGregor, Sean, Allyson Ettinger, Nick Judd, Paul Albee, Liwei Jiang, Kavel Rao, Will Smith, Shayne Longpre, Avijit Ghosh, Christopher Fiorelli, Michelle Hoang, Sven Cattell, and Nouha Dziri. "To Err Is AI: A Case Study Informing LLM Flaw Reporting Practices." arXiv (preprint), October 15, 2024.
  • Michałkiewicz-Kądziela, Ewa. "Deepfakes: New Challenges for Law and Democracy." In Artificial Intelligence and International Human Rights Law, edited by Michał Balcerzak and Julia Kapelańska-Pręgowska, 145--157. Cheltenham, UK: Edward Elgar Publishing, 2024.
  • Mishra, Saurabh, Anand Rao, Ramayya Krishnan, Bilal Ayyub, Amin Aria, and Enrico Zio. "Reliability, Resilience and Human Factors Engineering for Trustworthy AI Systems." arXiv (preprint), November 13, 2024.
  • Morales, Sergio, Robert Clarisó, and Jordi Cabot. "A DSL for Testing LLMs for Fairness and Bias." In Proceedings of the ACM/IEEE 2024 International Conference on Model Driven Engineering Languages and Systems (MODELS '24), Linz, Austria, September 22--27, 2024.
  • National Institute of Standards and Technology (NIST). Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. NIST AI 600-1. July 2024.
  • Nedzhvetskaya, Nataliya, and JS Tan. "No Simple Fix: How AI Harms Reflect Power and Jurisdiction in the Workplace." In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), 422--432. June 5, 2024.
  • Neogi, Trisha. Protecting People with Disabilities: A Guide for Non-Technical Committee Members in Understanding the Regulations Needed to Design Ethical AI. Master's Research Project, OCAD University, May 1, 2024.
  • O'Connor, Mary I. "Equity360: Gender, Race, and Ethnicity---The Power of AI to Improve or Worsen Health Disparities." Clinical Orthopaedics and Related Research 482, no. 4 (April 2024): 591--594.
  • Ortega-Bolaños, Ricardo, Joshua Bernal-Salcedo, Mariana Germán Ortiz, Julian Galeano Sarmiento, Gonzalo A. Ruz, and Reinel Tabares-Soto. "Applying the Ethics of AI: A Systematic Review of Tools for Developing and Assessing AI-Based Systems." Artificial Intelligence Review 57 (April 5, 2024): 110.
  • Paeth, Kevin, Daniel Atherton, Nikiforos Pittaras, Heather Frase, and Sean McGregor. Lessons for Editors of AI Incidents from the AI Incident Database. arXiv, September 24, 2024.
  • Palomba, Fabio, Andrea Di Sorbo, Davide Di Ruscio, Filomena Ferrucci, Gemma Catolino, Giammaria Giordano, Dario Di Dario, Gianmario Voria, Viviana Pentangelo, Maria Tortorella, et al. "FRINGE: Context-Aware FaiRness EngineerING in Complex Software Systems." In Proceedings of the 18th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM '24), 608--612. New York: Association for Computing Machinery, 2024.
  • Perez-de-Viñaspre, Olatz, Olatz Arregi, and Itziar Irigoien. "Adimen artifizialeko alborapena ulertzen [Understanding Artificial Intelligence Bias]." Ekaia: Zientzia eta Teknologia Aldizkaria, in press (2025).
  • Pérez-Ugena Coromina, María. "Sesgo de Género (en IA) [Gender Bias (in AI)]." EUNOMÍA. Revista en Cultura de la Legalidad 26 (March 14, 2024): 311--330.
  • Rauh, Maribeth, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Ramona Comanescu, Canfer Akbulut, Tom Stepleton, Juan Mateos-Garcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, Iason Gabriel, Verena Rieser, William Isaac, and Laura Weidinger. "Gaps in the Safety Evaluation of Generative AI." Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, no. 1 (2024): 1200--1217.
  • Raus, Rachele, Francesca Bisiani, Maria Margherita Mattioda, and Michela Tonti, eds. Multilinguisme européen et IA entre droit, traduction et didactique des langues / Multilinguismo europeo e IA tra diritto, traduzione e didattica delle lingue / European Multilingualism and Artificial Intelligence: The Impacts on Law, Translation and Language Teaching. Turin: Università di Torino, 2024.
  • Rémy, Nicolas, Frédéric Deschamps, and Stéphane Kreckelbergh. "Construire la confiance des ChatBots à base de LLM [Building Trust in LLM-Based Chatbots]." Paper presented at Congrès Lambda Mu 24: Les métiers du risque : clés de la réindustrialisation et de la transition écologique, Institut pour la Maîtrise des Risques (IMdR), Bourges, France, October 2024.
  • Rommetveit, Kjetil, and Ingrid Foss Ballo. D 5.1: Case Study Co-Creation Methodology Report. (How) Can You Build Morality into Artificially Intelligent Systems? SUPER MoRRI -- Scientific Understanding and Provision of an Enhanced and Robust Monitoring System for RRI, Version 2. June 14, 2023.
  • Rupe, Jason, and Chris LaPlante. "Introducing the Reliability Society Failure Database." IEEE Reliability Magazine 1, no. 1 (March 2024): 5--9.
  • Salvador, Cole. "Certified Safe: A Schematic for Approval Regulation of Frontier AI." arXiv, August 12, 2024.
  • Saran, Samir, Anulekha Nandi, and Sameer Patil. 'Moving Horizons': A Responsive and Risk-Based Regulatory Framework for A.I. Special Report. Observer Research Foundation, June 28, 2024.
  • Sengupta, Ushnish. "Black Box Algorithmic Decision-Making and Transparency Challenges in Policing Practice: Lessons from Implementation of New Technologies by the Toronto Police Service." In Policing and Intelligence in the Global Big Data Era, Volume II: New Global Perspectives on the Politics and Ethics of Knowledge, edited by Tereza Østbø Kuldova, Helene Oppen Ingebrigtsen Gundhus, and Christin Thea Wathne, 195--233. Palgrave's Critical Policing Studies. Cham: Palgrave Macmillan, 2024.
  • Shams, Rifat Ara, Didar Zowghi, and Muneera Bano. "AI for All: Identifying AI Incidents Related to Diversity and Inclusion." arXiv, July 19, 2024.
  • Shane, Tommy Shaffer. AI Incident Reporting: Addressing a Gap in the UK's Regulation of AI. The Centre for Long-Term Resilience, June 2024.
  • Sharma, Chinmayi. "AI's Hippocratic Oath." Washington University Law Review, forthcoming. Yale Law & Economics Research Paper. March 14, 2024.
  • Sharma, Kavita, and Padmavati Manchikanti. "Artificial Intelligence and Policy in Healthcare Industry." In Artificial Intelligence in Drug Development: Patenting and Regulatory Aspects, 117--144. Frontiers of Artificial Intelligence, Ethics and Multidisciplinary Applications. Singapore: Springer, 2024.
  • Shoker, Ali, Rehana Yasmin, and Paulo Esteves-Verissimo. "WIP: Savvy: A Trustworthy Autonomous Vehicles Architecture." Paper presented at the Symposium on Vehicles Security and Privacy (VehicleSec 2024), San Diego, CA, February 26, 2024.
  • Shoker, Ali, Rehana Yasmin, and Paulo Esteves-Verissimo. "Savvy: Trustworthy Autonomous Vehicles Architecture." arXiv, February 8, 2024.
  • Siqueira de Cerqueira, José Antonio, Mamia Agbese, Rebekah Rousi, Nannan Xi, Juho Hamari, and Pekka Abrahamsson. "Can We Trust AI Agents? An Experimental Study Towards Trustworthy LLM-Based Multi-Agent Systems for AI Ethics." arXiv preprint arXiv:2411.08881 [cs.CY], October 25, 2024.
  • Škoro, Ivana Emily. "Blockchain Art Activism: Examining Four Blockchain-Based Artworks for Social, Political, and Environmental Good." Master's thesis, Aalborg University and Media Arts Cultures Consortium, June 7, 2024.
  • Soudi, Marwa Samih, and Merja Bauters. "AI Guidelines and Ethical Readiness Inside SMEs: A Review and Recommendations." Digital Society 3 (January 31, 2024): Article 3.
  • Spinner, Thilo, Daniel Fürst, and Mennatallah El-Assady. "iNNspector: Visual, Interactive Deep Model Debugging." arXiv, July 25, 2024.
  • Stanley, Jeff, and Hannah Lettie. Emerging Risks and Mitigations for Public Chatbots: LILAC v1. MTR240382. McLean, VA: The MITRE Corporation, September 2024.
  • Torkamaan, Helma, Mohammad Tahaei, Stefan Buijsman, Ziang Xiao, Daricia Wilkinson, and Bart P. Knijnenburg. "The Role of Human-Centered AI in User Modeling, Adaptation, and Personalization---Models, Frameworks, and Paradigms." In A Human-Centered Perspective of Intelligent Personalized Environments and Systems, edited by Bruce Ferwerda, Mark Graus, Panagiotis Germanakos, and Marko Tkalčič, 43--84. Human--Computer Interaction Series. Cham: Springer, 2024.
  • Tran, Michelle, and Casey Fiesler. "'It's Not Exactly Meant to Be Realistic': Student Perspectives on the Role of Ethics in Computing Group Projects." In Proceedings of the 2024 ACM Conference on International Computing Education Research (ICER '24), 517--526. August 12, 2024.
  • Tuovinen, Lauri, and Kimmo Halunen. "What Is an AI Vulnerability, and Why Should We Care? Unpacking the Relationship Between AI Security and AI Ethics." In Proceedings of the Conference on Technology Ethics 2024 (Tethics 2024), edited by Thomas Olsson et al., 30--41. CEUR Workshop Proceedings 3901. RWTH Aachen, November 7, 2024.
  • Tyukin, Ivan Y., Tatiana Tyukina, Daniël P. van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, and Penelope Allison. "Coping with AI Errors with Provable Guarantees." Information Sciences 678 (September 2024): 120856.
  • Tyukin, Ivan Y., Tatiana Tyukina, Daniël P. van Helden, Zedong Zheng, Evgeny M. Mirkes, Oliver J. Sutton, Qinghua Zhou, Alexander N. Gorban, and Penelope Allison. "Weakly Supervised Learners for Correction of AI Errors with Provable Performance Guarantees." arXiv, February 13, 2024.
  • Uzwyshyn, Raymond. "Building Library Artificial Intelligence Capacity: Research Data Repositories and Scholarly Ecosystems." In New Horizons in Artificial Intelligence in Libraries, edited by Andrew Cox, Edmund Balnaves, Leda Bultrini, and Raymond Uzwyshyn, 121--140. Berlin: De Gruyter, 2024.
  • Verma, Karishma. "Digital Deception: The Impact of Deepfakes on Privacy Rights." Lex Scientia Law Review 8, no. 2 (2024): 859--896.
  • Vidgen, Bertie, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Max Bartolo, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse Khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Srijan Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Sarah Luger, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, and Joaquin Vanschoren. "Introducing v0.5 of the AI Safety Benchmark from MLCommons." arXiv, May 13, 2024.
  • Viureanu, Andrei, and Bogdan Ionescu. "AI Vulnerabilities." Paper presented at the 1st Workshop on Artificial Intelligence for Multimedia, AI Multimedia Lab, CAMPUS Research Institute, POLITEHNICA Bucharest, Bucharest, Romania, November 8, 2024.
  • Volkova, Svetlana. "The Dark Side of Deepfakes: Fraud and Cybercrime." In Deepfake Technology Applications and Societal Implications, edited by Gaurav Gupta, Kapil Pandla, Raj K. Kovid, and Sailaja Bohara, 221--242. Hershey, PA: IGI Global, 2024.
  • Wang, Runfeng. Examining Algorithmic Bias Toward Racial Minorities in New Media. Syracuse University, 2024. Renée Crown University Honors Thesis Projects.
  • Warren, Sarah Egan. "Navigating the Changes That AI Is Bringing to Higher Education." UNC System Learning and Technology Journal 2, no. 1 (August 26, 2024): Special Issue - Exploring the Transformative Impact of Artificial Intelligence in Higher Education: Challenges, Opportunities, and Ethical Considerations.
  • Wei, Mengyi, Chenjing Jiao, Chenyu Zuo, Lorenz Hurni, and Liqiu Meng. "How Generative AI Supports Understanding of an Ethically Sensitive AI-Induced Event." Abstracts of the International Cartographic Association 8 (2024): 26. Presented at the 2024 ICA Workshop on AI, Geovisualization, and Analytical Reasoning -- CartoVis24, University of Warsaw, Poland, September 7, 2024.
  • Wodi, Alex. "Artificial Intelligence (AI) Governance: An Overview." SSRN, May 24, 2024.
  • World Economic Forum. Generative AI Governance: Shaping a Collective Global Future in Collaboration with Accenture. AI Governance Alliance Briefing Paper Series, January 18, 2024.
  • Worth, Sophia, Ben Snaith, Arunav Das, Gefion Thuermer, and Elena Simperl. "AI Data Transparency: An Exploration through the Lens of AI Incidents." arXiv, September 5, 2024.
  • Xi, Ran. 2024. "A Systems Approach to Shedding Sunlight on AI Black Boxes." Hofstra Law Review 53 (3), forthcoming 2025.
  • Xu, Wei, Zaifeng Gao, and Marvin Dainoff. "An HCAI Methodological Framework (HCAI-MF): Putting It Into Action to Enable Human-Centered AI." arXiv, originally submitted November 27, 2023, last revised December 21, 2024.
  • Xu, Wei. "A New Design Philosophy: Human-Centered Artificial Intelligence." In Human-AI Interaction: Enabling Human-Centered AI. Beijing: Tsinghua University Press, forthcoming 2024.
  • Xu, Wei. "A 'User Experience 3.0 (UX 3.0)' Paradigm Framework: User Experience Design for Human-Centered AI Systems." arXiv, March 7, 2024.
  • Xu, Wei, and Zaifeng Gao. "An Intelligent Sociotechnical Systems (iSTS) Concept: Toward a Sociotechnically-Based Hierarchical Human-Centered AI Approach." arXiv, July 22, 2024.
  • Xu, Wei, and Zaifeng Gao. "Enabling Human-Centered AI: A Methodological Perspective." In 2024 IEEE 4th International Conference on Human-Machine Systems (ICHMS), Toronto, ON, May 15--17, 2024. IEEE, June 19, 2024.
  • Xu, Wei, Zaifeng Gao, and Liezhong Ge. "New Research Paradigms and Agenda of Human Factors Science in the Intelligence Era." Acta Psychologica Sinica 56, no. 3 (2024): 363--382.
  • Yang, Khoo Wei. Data Relationality: Privacy in the AI Age. Kuala Lumpur: Khazanah Research Institute, October 24, 2024.
  • Yeung, Karen. "Beyond 'AI Boosterism.'" IPPR Progressive Review 31, no. 2 (2024): 114--20.
  • Ződi, Zsolt. "The Conflict of the Engineering and the Legal Mindset in the Artificial Intelligence Act." SSRN, last revised November 6, 2024.

2023

Citations in peer-reviewed journal articles, book chapters, and preprints

  • Ali, S. A., Khan, R., & Ali, S. N. (2023). The Promises and Perils of Artificial Intelligence: An Ethical and Social Analysis. In S. Chakraborty (Ed.), Investigating the Impact of AI on Ethics and Spirituality (pp. 1-24). IGI Global.
  • Apruzzese, G., Anderson, H. S., Dambra, S., Freeman, D., Pierazzi, F., & Roundy, K. (2023). "Real attackers don't compute gradients": Bridging the gap between adversarial ML research and practice. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 1-10). IEEE.
  • Bach, T. A., Kristiansen, J. K., Babic, A., & Jacovi, A. (2023). Unpacking human-AI interaction in safety-critical industries: A systematic literature review. arXiv.
  • Baeza-Yates, R. (2023). An introduction to responsible AI. European Review, 31(4), 406-421.
  • Batool, A., Zowghi, D., & Bano, M. (2023). Responsible AI governance: A systematic literature review. arXiv.
  • Bommasani, R., Klyman, K., Longpre, S., Kapoor, S., Maslej, N., Xiong, B., Zhang, D., & Liang, P. (2023). The Foundation Model Transparency Index. arXiv.
  • Bondi-Kelly, E., Hartvigsen, T., Sanneman, L. M., Sankaranarayanan, S., Harned, Z., Wickerson, G., Gichoya, J. W., Oakden-Rayner, L., Celi, L. A., Lungren, M. P., Shah, J. A., & Ghassemi, M. (2023). Taking off with AI: Lessons from aviation for healthcare. In EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 4, pp. 1-14). ACM.
  • Chatterjee, R. (2023). The scope of roboethics in business ethics. 3D... IBA Journal of Management & Leadership, 14(2), 22-27.
  • Chen, P.-Y., & Liu, S. (2024). Holistic adversarial robustness of deep learning models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15411-15420.
  • Di Mascio, T., Caruso, F., & Peretti, S. (2023). How to make an artificial intelligence algorithm "ecological"? Insights from a holistic perspective. In CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter (Article No. 21, pp. 1-7). ACM.
  • Faivre, J. (2023). The AI Act: Towards global effects? SSRN.
  • Feffer, M., Martelaro, N., & Heidari, H. (2023). The AI Incident Database as an educational tool to raise awareness of AI harms: A classroom exploration of efficacy, limitations, & future improvements. EAAMO '23: Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Article No. 3, pp. 1-11).
  • Greser, J. (2023). Kilka uwag o cyberbezpieczeństwie medycznej AI [A few remarks on the cybersecurity of medical AI]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie (pp. 73-81). NASK - Państwowy Instytut Badawczy. ISBN 978-83-65448-55-2.
  • Groza, A., & Marginean, A. (2023). Brave new world: AI in teaching and learning. In ICERI2023 Proceedings (pp. 8706-8713). Technical University of Cluj-Napoca.
  • Groza, A., & Marginean, A. (2023). Brave new world: Artificial intelligence in teaching and learning. arXiv.
  • Hadshar, R. (2023). A review of the evidence for existential risk from AI via misaligned power-seeking. arXiv.
  • Hong, Y., Lian, J., Xu, L., Min, J., Wang, Y., & Freeman, L. J. (2023). Statistical perspectives on reliability of artificial intelligence systems. Quality Engineering, 35(1), 56-78.
  • Huang, R., Holzapfel, A., Sturm, B., & Kaila, A.-K. (2023). Beyond diverse datasets: Responsible MIR, interdisciplinarity, and the fractured worlds of music. Transactions of the International Society for Music Information Retrieval, 6(1), 43-59.
  • Inoue, S., Nguyen, M.-T., Mizokuchi, H., Nguyen, T.-A. D., Nguyen, H.-H., & Le, D. T. (2023). Towards safer operations: An expert-involved dataset of high-pressure gas incidents for preventing future failures. arXiv.
  • Kanade, A., Bhoite, S., Kanade, S., & Jain, N. (2023). Artificial Intelligence and Morality: A Social Responsibility. Journal of Intelligence Studies in Business, 13(1).
  • Kilhoffer, Z., Nikolich, A., Sanfilippo, M. R., & Zhou, Z. (2023). AI accountability policy. School of Information Sciences, University of Illinois at Urbana-Champaign.
  • Larsonneur, C. (2023). L'algorithme sert-il les traducteurs ? Conditions et contexte de travail avec les outils de traduction neuronale [Does the algorithm serve translators? Working conditions and context with neural translation tools]. In O. Guillon & S. Pickford (Eds.), Approches socio-économiques de la traduction littéraire (Vol. 35, Issue 2, pp. 90-103). Parallèles.
  • Lupo, G. (2023). Risky artificial intelligence: The role of incidents in the path to AI regulation. Law, Technology and Humans, 5(1), 133-152. Faculty of Law, Queensland University of Technology.
  • Marres, N., & Sormani, P. (2023). Testing 'AI': Do we have a situation? A conversation. Universität Siegen.
  • McConvey, K., Guha, S., & Kuzminykh, A. (2023). A human-centered review of algorithms in decision-making in higher education. In CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Article No. 223, pp. 1-15). ACM.
  • McGregor, S. (2023). A scaled multiyear responsible artificial intelligence impact assessment. Computer, 56(8), 20-27.
  • McGregor, S., & Hostetler, J. (2023). Data-centric governance. arXiv.
  • Morgan, P. (2023). Tort liability and autonomous systems accidents: Challenges and future developments. In P. Morgan (Ed.), Tort liability and autonomous systems accidents (pp. 1-26). Edward Elgar Publishing.
  • Pan, C., Gao, Y., & Gu, A. (2023). Modeling operational profile for AI systems: A case study on UAV systems. In 2023 4th International Conference on Intelligent Computing and Human-Computer Interaction (ICHCI) (pp. 1-8). IEEE.
  • Pletcher, S. N. (2023, September 1). Starting slowly to go fast: Deep dive in the context of AI pilot projects.
  • Pletcher, S. (2023). Visual privacy: Current and emerging regulations around unconsented video analytics in retail. arXiv.
  • Rodrigues, R., Resseguier, A., & Santiago, N. (2023). When artificial intelligence fails: The emerging role of incident databases. Public Governance, Administration and Finances Law Review, 8(2), 17-28.
  • Rousi, R., Samani, H., Mäkitalo, N., Vakkuri, V., Linkola, S., Kemell, K.-K., Daubaris, P., Fronza, I., Mikkonen, T., & Abrahamsson, P. (2024). Business and ethical concerns in domestic conversational generative AI-empowered multi-robot systems. In S. Hyrynsalmi, J. Münch, K. Smolander, & J. Melegati (Eds.), Software Business: 14th International Conference, ICSOB 2023, Lahti, Finland, November 27-29, 2023, Proceedings (pp. 173-189). Springer.
  • Schloetzer, J. D., & Yoshinaga, K. (2023). Algorithmic hiring systems: Implications and recommendations for organisations and policymakers. In YSEC Yearbook of Socio-Economic Constitutions 2023: Law and the governance of artificial intelligence (pp. 213-246). Springer.
  • Schloetzer, J. D., & Yoshinaga, K. (2023). Algorithmic hiring systems: Implications and recommendations for organisations and policymakers. Law and the Governance of Artificial Intelligence, Yearbook of Socio-Economic Constitutions. Springer, Cham.
  • Shaffer Shane, T. (2023). AI incidents and 'networked trouble': The case for a research agenda. Big Data & Society, 10(2).
  • Shoker, S., Reddie, A., Barrington, S., Booth, R., Brundage, M., Chahal, H., Depp, M., Drexel, B., Gupta, R., Favaro, M., Hecla, J., Hickey, A., Konaev, M., Kumar, K., Lambert, N., Lohn, A., O'Keefe, C., Rajani, N., Sellitto, M., Trager, R., Walker, L., Wehsener, A., & Young, J. (2023). Confidence-building measures for artificial intelligence: Workshop proceedings. arXiv.
  • Silicki, K. (2023). Cyberbezpieczeństwo systemów wykorzystujących sztuczną inteligencję w świetle raportów ENISA [Cybersecurity of systems using artificial intelligence in light of ENISA reports]. In A. Szczęsna & M. Stachoń (Eds.), Cyberbezpieczeństwo AI. AI w cyberbezpieczeństwie (pp. 10-21). NASK - Państwowy Instytut Badawczy.
  • Sood, S., & Kim, A. (2023). The golden age of the big data audit: Agile practices and innovations for e-commerce, post-quantum cryptography, psychosocial hazards, artificial intelligence algorithm audits, and deepfakes. International Journal of Innovation and Economic Development, 9(2), 7-23.
  • Stoica, A.-A., & Pica, Ș. (2023). Drones and the ethical politics of public monitoring. Challenges of the Knowledge Society. Public Law, 337-345.
  • Turri, V., & Dzombak, R. (2023). Why we need to know more: Exploring the state of AI incident documentation practices. AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 576-583.
  • Velichkovska, B., Denkovski, D., Gjoreski, H., Kalendar, M., & Osmani, V. (2023). A Survey of Bias in Healthcare: Pitfalls of Using Biased Datasets and Applications. In Artificial Intelligence Application in Networks and Systems (CSOC 2023) (pp. 570-584). Lecture Notes in Networks and Systems, volume 724. Springer.
  • Watson, E., Viana, T., & Zhang, S. (2023). Augmented behavioral annotation tools, with application to multimodal datasets and models: A systematic review. AI, 4(1), 128-171.
  • Winter, C., Hollman, N., & Manheim, D. (2023). Value Alignment for Advanced Artificial Judicial Intelligence. American Philosophical Quarterly, 60(2), 187-203.
  • Wright, L. S. (2023). Artificial intelligence: Why we need it and why we need to be cautious. In M. Lovell, O. S. Moghraby, & R. Waller (Eds.), Digital Mental Health: From Theory to Practice (pp. 60-71). Cambridge University Press.
  • Wu, W., & Liu, S. (2023). A comprehensive review and systematic analysis of artificial intelligence regulation policies. arXiv.
  • Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. In 2023 IEEE/ACM 2nd International Conference on AI Engineering -- Software Engineering for AI (CAIN) (pp. 27-34). IEEE.
  • Xia, B., Lu, Q., Perera, H., Zhu, L., Xing, Z., Liu, Y., & Whittle, J. (2023). Towards concrete and connected AI risk assessment (C2AIRA): A systematic mapping study. arXiv.
  • Xu, W. (2023). User-centered design (IX): A "user experience 3.0" paradigm framework in the intelligence era. arXiv.
  • Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. Interactions, 30(1), 42-47.
  • Xu, W., & Dainoff, M. (2023). Enabling human-centered AI: A new junction and shared journey between AI and HCI communities. arXiv.
  • Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2023). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human--Computer Interaction, 39(3), 494-518.
  • Zhan, X., Sun, H., & Miranda, S. M. (2023). How does AI fail us? A typological theorization of AI failures. In ICIS 2023 Proceedings: AI in Business and Society.
  • Zhou, L., Moreno-Casares, P. A., Martínez-Plumed, F., Burden, J., Burnell, R., Cheke, L., Ferri, C., Marcoci, A., Mehrbakhsh, B., Moros-Daval, Y., Ó hÉigeartaigh, S., Rutar, D., Schellaert, W., Voudouris, K., & Hernández-Orallo, J. (2023). Predictable artificial intelligence. arXiv.
  • Zhu, Y. (Zhu Yu 朱禹), Chen, G. (Chen Guanze 陈关泽), Lu, Y. (Lu Yongrong 陆泳溶), & Fan, W. (Fan Wei 樊伟). (2023). Generative Artificial Intelligence Governance Action Framework: Content Analysis Based on AIGC Incident Report Texts. 图书情报知识 (Library and Information Knowledge), 40(4), 41-51.
  • Žunić, L., Đukanović, G., & Popović, G. (2023). Rizici vještačke inteligencije: Analiza i implikacije [Risks of artificial intelligence: Analysis and implications]. In 15th International Conference "Information Technology and Application" (ITeO 2023) (Vol. 15, pp. 29-40). Banja Luka, Bosnia and Herzegovina.

Citations in briefs, theses, white papers, and mixed genres

  • Acion, L., Rajngewerc, M., Randall, G., & Etcheverry, L. (2023). Generative AI poses ethical challenges for open science. Nature Human Behaviour, 7, 1800-1801.
  • Antunović, J. (2023). Sigurnost komunikacije u kritičnoj infrastrukturi [Communication security in critical infrastructure] [Undergraduate thesis, Sveučilište u Zagrebu, Fakultet prometnih znanosti]. Repozitorij Fakulteta prometnih znanosti.
  • Agnew, W. (2023). AI ethics and critique for robotics (Publication No. 30636400) [Doctoral dissertation, University of Washington]. ProQuest Dissertations & Theses Global.
  • Attard-Frost, B., & Widder, D. G. (2023). The Ethics of AI Value Chains. arXiv.
  • Bogusz, I. C., & Johnson, D. (2023). AI for the benefit of society: Progress with trust and transparency. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Social Sciences, Department of Informatics and Media. Stockholm: Fores.
  • D'Albergo, E., Fasciani, T., & Giovanelli, G. (2023, January 19-21). Social Powers and Governance of Artificial Intelligence in Urban Security Policies: Video Surveillance in Turin. Paper presented at Re-assembling the social. Re(i)stituting the social. 40 years of AIS, Naples, Italy.
  • Desouza, K. C., & Dawson, G. S. (2023). Pathways to trusted progress with artificial intelligence. IBM Center for The Business of Government.
  • Duarte, A. B. F. (2023). Enhancing portuguese public services: Prototype of a mobile application with a digital assistant [Trabalho de projeto de mestrado, Escola Superior de Comunicação Social]. Instituto Politécnico de Lisboa, Escola Superior de Comunicação Social.
  • Giannini, A. (2023). Criminal behavior and accountability of artificial intelligence systems (Doctoral dissertation, University of Florence and Maastricht University).
  • Hoffmann, M., & Frase, H. (2023). Adding structure to AI harm: An introduction to CSET's AI harm framework. Center for Security and Emerging Technology.
  • Isbell, C., Littman, M. L., & Norvig, P. (2023). Viewpoint: Software Engineering of Machine Learning Systems. Communications of the ACM, 66(2), 35-37.
  • Knight, S., Heggart, K., Dickson-Deane, C., Ford, H., Hunter, J., Johns, A., Kitto, K., Cetindamar Kozanoglu, D., Maher, D., & Narayan, B. (2023). Submission in response to the House Standing Committee on Employment, Education and Training's inquiry into the use of generative artificial intelligence in the Australian education system.
  • Kutz, J., Göbels, V. P., Brajovic, D., Fresz, B., Renner, N., Omri, S., Neuhüttler, J., Huber, M., & Bienzeisler, B. (2023). KI-Zertifizierung und Absicherung im Kontext des EU AI Act: Herausforderungen und Bedürfnisse aus Sicht von Unternehmen. Fraunhofer IAO.
  • Longstaff, T. (2023). SEI thoughts on AI T&E and related topics (Technical Report). Carnegie Mellon University, Pittsburgh, PA; Air Force Life Cycle Management Center, Hanscom AFB, MA. DTIC Accession No. AD1199686.
  • Massei, G. (2023). Algorithmic Trading: An Overview and Evaluation of Its Impact on Financial Markets [Master's thesis, Università Ca' Foscari Venezia].
  • Musser, M., Lohn, A., Dempsey, J. X., Spring, J., Kumar, R. S. S., Leong, B., Liaghati, C., Martinez, C., Grant, C. D., Rohrer, D., Frase, H., Bansemer, J., Rodriguez, M., Regan, M., Chowdhury, R., & Hermanek, S. (2023). Adversarial machine learning and cybersecurity: Risks, challenges, and legal implications. Center for Security and Emerging Technology.
  • Narayanan, M., Seymour, A., Frase, H., & Elmgren, K. (2023). Repurposing the wheel: Lessons for AI standards (Workshop Report). Center for Security and Emerging Technology.
  • NIST. AI Risk Management Framework Playbook. 2023.
  • Sharma, A. (2023). Testing of machine learning algorithms and models (PhD dissertation). Universität Oldenburg.
  • Shneiderman, B. (2023). ACM TechBrief: Safer algorithmic systems (Issue 6). Association for Computing Machinery.
  • Sivakumaran, A. (2023). Investigating consumer perception and speculative AI labels for creative AI usage in media (Master's thesis, KTH, School of Electrical Engineering and Computer Science).
  • Toner, H., Ji, J., Bansemer, J., Lim, L., Painter, C., Corley, C., Whittlestone, J., Botvinick, M., Rodriguez, M., & Kumar, R. S. S. (2023). Skating to where the puck is going: Anticipating and managing risks from frontier AI systems. Center for Security and Emerging Technology.
  • Wang, L. (2023). An urgency for inclusivity: Redesigning datasets for improved representation of LGBTQ+ identity terms in artificial intelligence (A.I.). HSS4 - The Modern Context: Queer Theory and Politics, Professor Barnick.
  • Zhang, J. (2023). Evaluating Artificial Neural Network Robustness for Safety-Critical Systems [Ph.D. dissertation, Technical University of Denmark]. Kgs. Lyngby: Technical University of Denmark.

2022

  • Macrae, Carl. "Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk." Risk analysis 42.9 (2022): 1999-2025.
  • Felländer, Anna, et al. "Achieving a Data-driven Risk Assessment Methodology for Ethical AI." Digital Society 1.2 (2022): 13.
  • Apruzzese, Giovanni, et al. "'Real Attackers Don't Compute Gradients': Bridging the Gap Between Adversarial ML Research and Practice." arXiv preprint arXiv:2212.14315 (2022).
  • Petersen, Eike, et al. "Responsible and regulatory conform machine learning for medicine: A survey of challenges and solutions." IEEE Access 10 (2022): 58375-58418.
  • Schuett, Jonas. "Three lines of defense against risks from AI." arXiv preprint arXiv:2212.08364 (2022).
  • Schiff, Daniel S. "Looking through a policy window with tinted glasses: Setting the agenda for US AI policy." Review of Policy Research.
  • Neretin, Oleksii, and Vyacheslav Kharchenko. "Model for Describing Processes of AI Systems Vulnerabilities Collection and Analysis using Big Data Tools." 2022 12th International Conference on Dependable Systems, Services and Technologies (DESSERT). IEEE, 2022.
  • Durso, Francis, et al. "Analyzing Failures in Artificial Intelligent Learning Systems (FAILS)." 2022 IEEE 29th Annual Software Technology Conference (STC). IEEE, 2022.
  • Kassab, Mohamad, Joanna DeFranco, and Phillip Laplante. "Investigating Bugs in AI-Infused Systems: Analysis and Proposed Taxonomy." 2022 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2022.
  • Braga, Juliao, et al. "Projeto para o desenvolvimento de um artigo sobre governança de algoritmos e dados." (2022).
  • Secchi, Carlo, and Alessandro Gili. "Digitalisation for sustainable infrastructure: the road ahead." Digitalisation for sustainable infrastructure (2022): 1-326.
  • Groza, Adrian, et al. "Elaborarea cadrului strategic național în domeniul inteligenței artificiale" [Developing the national strategic framework in the field of artificial intelligence].
  • Braga, Juliao, et al. "Project for the Development of a Paper on Algorithm and Data Governance." (2022). (Original Portuguese).
  • NIST. AI Risk Management Framework Playbook. 2022.
  • Shneiderman, Ben. Human-Centered AI. Oxford University Press, 2022.
  • Schwartz, Reva, et al. "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence." (2022).
  • McGrath, Quintin, et al. "An Enterprise Risk Management Framework to Design Pro-Ethical AI Solutions." University of South Florida. (2022).
  • Nor, Ahmad Kamal Mohd, et al. "Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study of Real-World Gas Turbine Anomalies." (2022).
  • Xie, Xuan, Kristian Kersting, and Daniel Neider. "Neuro-Symbolic Verification of Deep Neural Networks." arXiv preprint arXiv:2203.00938 (2022).
  • Hundt, Andrew, et al. "Robots Enact Malignant Stereotypes." 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
  • Tidjon, Lionel Nganyewou, and Foutse Khomh. "Threat Assessment in Machine Learning based Systems." arXiv preprint arXiv:2207.00091 (2022).
  • Naja, Iman, et al. "Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information." IEEE Access 10 (2022): 74383-74411.
  • Cinà, Antonio Emanuele, et al. "Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning." arXiv preprint arXiv:2205.01992 (2022).
  • Schröder, Tim, and Michael Schulz. "Monitoring machine learning models: A categorization of challenges and methods." Data Science and Management (2022).
  • Corea, Francesco, et al. "A principle-based approach to AI: the case for European Union and Italy." AI & SOCIETY (2022): 1-15.
  • Carmichael, Zachariah, and Walter J. Scheirer. "Unfooling Perturbation-Based Post Hoc Explainers." arXiv preprint arXiv:2205.14772 (2022).
  • Wei, Mengyi, and Zhixuan Zhou. "AI Ethics Issues in Real World: Evidence from AI Incident Database." arXiv preprint arXiv:2206.07635 (2022).
  • Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions." IEEE Access (2022).
  • Karunagaran, Surya, Ana Lucic, and Christine Custis. "XAI Toolsheet: Towards A Documentation Framework for XAI Tools."
  • Paudel, Shreyasha, and Aatiz Ghimire. "AI Ethics Survey in Nepal."
  • Ferguson, Ryan. "Transform Your Risk Processes Using Neural Networks."
  • Fujitsu Corporation. "AI Ethics Impact Assessment Casebook." 2022.
  • Shneiderman, Ben, and Mengnan Du. "Human-Centered AI: Tools." 2022.
  • Salih, Salih. "Understanding Machine Learning Interpretability." Medium. 2022
  • Garner, Carrie. "Creating Transformative and Trustworthy AI Systems Requires a Community Effort." Software Engineering Institute. 2022
  • Weissinger, Laurin. "AI, Complexity, and Regulation." The Oxford Handbook of AI Governance (February 14, 2022).

2021

  • Arnold, Zachary, and Helen Toner. "AI Accidents: An Emerging Threat." Center for Security and Emerging Technology (CSET) Policy Brief (2021).
  • Aliman, Nadisha-Marie, Leon Kester, and Roman Yampolskiy. "Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions." Philosophies 6.1 (2021): 6.
  • Falco, Gregory, and Leilani H. Gilpin. "A stress testing framework for autonomous system verification and validation (v&v)." 2021 IEEE International Conference on Autonomous Systems (ICAS). IEEE, 2021.
  • Petersen, Eike, et al. "Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions." arXiv preprint arXiv:2107.09546 (2021).
  • John-Mathews, Jean-Marie. AI ethics in practice, challenges and limitations. Diss. Université Paris-Saclay, 2021.
  • Macrae, Carl. "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety and Sociotechnical Sources of Risk." Safety and Sociotechnical Sources of Risk (June 4, 2021) (2021).
  • Hong, Matthew K., et al. "Planning for Natural Language Failures with the AI Playbook." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.
  • Ruohonen, Jukka. "A Review of Product Safety Regulations in the European Union." arXiv preprint arXiv:2102.03679 (2021).
  • Kalin, Josh, David Noever, and Matthew Ciolino. "A Modified Drake Equation for Assessing Adversarial Risk to Machine Learning Models." arXiv preprint arXiv:2103.02718 (2021).
  • Aliman, Nadisha Marie, and Leon Kester. "Epistemic defenses against scientific and empirical adversarial AI attacks." CEUR Workshop Proceedings. Vol. 2916. CEUR WS, 2021.
  • John-Mathews, Jean-Marie. L'Éthique de l'Intelligence Artificielle en Pratique. Enjeux et Limites [The ethics of artificial intelligence in practice: Issues and limits]. Diss. Université Paris-Saclay, 2021.
  • Smith, Catherine. "Automating intellectual freedom: Artificial intelligence, bias, and the information landscape." IFLA Journal (2021): 03400352211057145.

If you have a scholarly work that should be added here, please contact us.
