What are these resources?

The following resources have been associated with incidents in the database to provide tools and processes to people and companies looking for best practices in the prevention or mitigation of similar incidents in the future. While you can explore incidents according to their resources, most people will benefit more from finding relevant incidents and then looking at the resources associated with those incidents.


The incident resources are in development and have not yet been applied to a large number of incidents. If you have a resource that is not currently included, please consider contacting us with details about the resource and we will work to add it to the AI Incident Database.

Taxonomy Fields

Datasheets for Datasets Searchable in Discover App

Datasheets for datasets is a tool for documenting the datasets used for training and evaluating machine learning models. The aim of datasheets is to increase dataset transparency and facilitate better communication between dataset creators and dataset consumers (e.g., those using datasets to train machine learning models). Datasheets encourage dataset creators to carefully reflect on the dataset creation process, enabling them to uncover possible sources of bias in their data or unintentional assumptions that they’ve made. For dataset consumers, the information contained within datasheets can help ensure that the dataset is the right choice for the task at hand. Datasheets can optionally be exposed to end users for increased transparency and trust. Datasheets contain questions about dataset motivation, composition, collection, pre-processing, labeling, intended uses, distribution, and maintenance. Crucially, and unlike other tools for metadata extraction, datasheets are not automated, but are intended to capture information known only to the dataset creators and often lost or forgotten over time. Read More
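To make the structure concrete, a team might record a datasheet as a simple structured object so that the answers travel alongside the dataset itself. The sketch below is only illustrative: the field names mirror the question areas listed above, but they are assumptions for this example, not an official schema from the datasheets work.

```python
from dataclasses import dataclass

# A minimal, hypothetical datasheet record. The sections correspond to the
# question areas named above (motivation, composition, collection,
# pre-processing, labeling, intended uses, distribution, maintenance).
# This structure is an illustrative assumption, not a standard format.
@dataclass
class Datasheet:
    motivation: str      # why the dataset was created
    composition: str     # what instances represent, counts, splits
    collection: str      # how and by whom the data was gathered
    preprocessing: str   # cleaning, filtering, transformations applied
    labeling: str        # annotation process and instructions
    intended_uses: str   # tasks the dataset is (and is not) suited for
    distribution: str    # how and under what license it is shared
    maintenance: str     # who maintains it and how errata are handled

# Example: the dataset creator fills in answers while the knowledge is fresh,
# rather than relying on automated metadata extraction after the fact.
sheet = Datasheet(
    motivation="Benchmark sentiment classification on product reviews.",
    composition="50,000 English reviews; 80/10/10 train/dev/test split.",
    collection="Collected from a public review site during 2019-2020.",
    preprocessing="Deduplicated; reviews under 10 tokens removed.",
    labeling="Star ratings mapped to positive/negative; no human relabeling.",
    intended_uses="Sentiment classification research; not production use.",
    distribution="CC BY 4.0, shared as versioned archives.",
    maintenance="Maintained by the original authors; errata via email.",
)
print(sheet.intended_uses)
```

A consumer reading such a record can quickly check fields like `intended_uses` and `labeling` to judge whether the dataset fits their task, which is the kind of creator-to-consumer communication the datasheets approach aims to support.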

Microsoft AI Fairness Checklist Searchable in Discover App

Many organizations have published principles to guide the responsible development and deployment of AI systems, but it is largely left to practitioners to put those principles into practice. Other organizations have therefore produced AI ethics checklists, including checklists for specific concepts, such as fairness.

Checklists in other domains, such as aviation, medicine, and structural engineering, have had well-documented success in saving lives and improving professional practices. But unless checklists are grounded in practitioners’ needs, they may be misused or ignored.

The fairness checklist research project explores how checklists may be designed to support the development of fairer AI products and services. To do this, we work with the AI practitioners whom the checklists are intended to support, soliciting their input on the checklist design and supporting the adoption and integration of the checklist into AI design, development, and deployment lifecycles.

Our first studies in this project have led to a fairness checklist co-designed with practitioners, as well as insights into how organizational and team processes shape how AI teams address fairness harms. Read More