CSETv1_Annotator-1
What is the GMF Taxonomy?
The Goals, Methods, and Failures (GMF) taxonomy is a failure cause analysis taxonomy that relates the goals of the system deployment, the methods of the system, and their likely failures. Details on the process are available in the recent work published for the SafeAI paper.
How do I explore the taxonomy?
All taxonomies can be used to filter incident reports within the Discover application. The taxonomy filters work similarly to how you filter products on an e-commerce website. Use the search field at the bottom of the “Classifications” tab to find the taxonomy field you would like to filter with, then click the desired value to apply the filter.
About the Responsible AI Collaborative
The AI Incident Database is a collaborative project of many people and organizations. Details on the people and organizations contributing to this particular taxonomy will appear here, while you can learn more about the Collab itself on the incident database's home and about pages.
The maintainers of this taxonomy include:
Taxonomy Fields
If harms were potentially unevenly distributed among people, on what basis?
Definition: Multiple values can apply.
Genetic information refers to information about a person’s genetic tests or the genetic tests of their relatives. Genetic information can predict the manifestation of a disease or disorder.
Indicates the sector in which the AI system is deployed
Definition: Indicate the sector in which the AI system is deployed.
There could be multiple entries for this field.
Did the incident occur in a domain with physical objects?
Définition: “Yes” if the AI system(s) is embedded in hardware that can interact with, affect, and change physical objects (cars, robots, medical facilities, etc.). Mark “No” if the system cannot; this includes systems that only inform, detect, predict, or recommend.
Did the AI incident occur in the entertainment industry?
Definition: “Yes” if the sector in which the AI was used is associated with entertainment. “No” if it was used in a different, clearly identifiable sector. “Maybe” if the sector of use could not be determined.
Was the incident about a report, test, or study of data instead of the AI itself?
Definition: “Yes” if the incident is about a report, test, or study of the data and does not discuss an instance of injury, damage, or loss. “Maybe” if it is unclear. Otherwise mark “No.”
Was the reported system (even if the AI's involvement is unknown) deployed or sold to users?
Definition: “Yes” if the involved system was deployed or sold to users. “No” if it was not. “Maybe” if there is not enough information or if the use is unclear.
Was this a test or demonstration of an AI system done by developers, producers or researchers (versus users) in controlled conditions?
Definition: “Yes” if it was a test/demonstration performed by developers, producers, or researchers in controlled conditions. “No” if it was not a test/demonstration. “No” if the test/demonstration was done by a user. “No” if the test/demonstration was in operational or uncontrolled conditions. “Maybe” otherwise.
Was this a test or demonstration of an AI system done by developers, producers or researchers (versus users) in operational conditions?
Definition: “Yes” if it was a test/demonstration performed by developers, producers, or researchers in operational conditions. “No” if it was not a test/demonstration. “No” if the test/demonstration was done by a user. “No” if the test/demonstration was in controlled or non-operational conditions. “Maybe” otherwise.
Was this a test or demonstration done by users in controlled conditions?
Definition: “Yes” if it was a test/demonstration performed by users in controlled conditions. “No” if it was not a test/demonstration. “No” if the test/demonstration was done by developers, producers, or researchers. “No” if the test/demonstration was in operational or uncontrolled conditions. “Maybe” otherwise.
Was this a test or demonstration done by users in operational conditions?
Definition: “Yes” if it was a test/demonstration performed by users in operational conditions. “No” if it was not a test/demonstration. “No” if the test/demonstration was done by developers, producers, or researchers. “No” if the test/demonstration was in controlled or non-operational conditions. “Maybe” otherwise.
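Taken together, these four fields form a two-by-two grid over who performed the test/demonstration (developers, producers, or researchers versus users) and the conditions it ran under (controlled versus operational), so at most one of the four should be “Yes” for any genuine test or demonstration. A minimal sketch of that mapping in Python; the labels and function name are illustrative assumptions, not official field names:

```python
# Illustrative only: which of the four test/demonstration fields
# above receives a "Yes" for a given performer and condition.
def test_demo_field(performer: str, conditions: str):
    """performer: 'developer' (covers producers/researchers) or 'user';
    conditions: 'controlled' or 'operational'."""
    table = {
        ("developer", "controlled"): "test by developers in controlled conditions",
        ("developer", "operational"): "test by developers in operational conditions",
        ("user", "controlled"): "test by users in controlled conditions",
        ("user", "operational"): "test by users in operational conditions",
    }
    return table.get((performer, conditions))  # None if inputs are unclear

print(test_demo_field("user", "operational"))
```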
Did the incident occur in a domain where harm could be expected to occur?
Definition: Using the answers to the 8 domain questions, assess whether the incident occurred in a domain where harm could be expected to occur. If you are unclear, input “maybe.”
Did tangible harm (loss, damage, or injury) occur?
Definition: An assessment of whether tangible harm, imminent tangible harm, or non-imminent tangible harm occurred. This assessment does not consider the context of the tangible harm, if an AI was involved, or if there is an identifiable, specific, and harmed entity. It is also not assessing if an intangible harm occurred. It is only asking if tangible harm occurred and how imminent it was.
Does the incident involve an AI system?
Definition: An assessment of whether or not an AI system was involved. It is sometimes difficult to judge between an AI and an automated system or expert rules system. In these cases, select “maybe.”
Can the technology be directly and clearly linked to the adverse outcome of the incident?
Definition: An assessment of the technology's involvement in the chain of harm. "Yes" indicates that the technology was involved in harm, its behavior can be directly linked to the harm, and the harm may not have occurred if the technology had acted differently. "No" indicates that the technology's behavior cannot be linked to the harm outcome. "Maybe" indicates that the link is unclear.
There is a potentially identifiable specific entity that experienced the harm
Definition: “Yes” if it is theoretically possible to both specify and identify the entity. Having that information is not required. The information just needs to exist and be potentially discoverable. “No” if there are not any potentially identifiable specific entities, or if the harmed entities are a class or subgroup that can only be characterized.
Annotator's AI tangible harm level assessment
Definition: An assessment of the AI tangible harm level, which takes into account the CSET definitions of AI tangible harm levels, along with the inputs for annotation fields about the AI, harm, chain of harm, and entity.
Did this impact people's access to critical or public services (health care, social services, voting, transportation, etc.)?
Definition: Did this impact people's access to critical or public services (health care, social services, voting, transportation, etc.)?
Was this a violation of human rights, civil liberties, civil rights, or democratic norms?
Definition: Indicate if a violation of human rights, civil rights, civil liberties, or democratic norms occurred.
Was a minor involved in the incident (disproportionately treated or specifically targeted/affected)?
Definition: Indicate if a minor was disproportionately targeted or affected.
Was detrimental content (misinformation, hate speech) involved?
Definition: Detrimental content can include deepfakes, identity misrepresentation, insults, threats of violence, eating disorder or self-harm promotion, extremist content, misinformation, sexual abuse material, and scam emails. Detrimental content in itself is often not harmful; however, it can lead to or instigate injury, damage, or loss.
Was a group of people or an individual treated differently based upon a protected characteristic?
Definition: Protected characteristics include religion, commercial facilities, geography, age, sex, sexual orientation or gender identity, familial status (e.g., having or not having children) or pregnancy, disability, veteran status, genetic information, financial means, race or creed, ideology, nation of origin, citizenship, and immigrant status.
At the federal level in the US, age is a protected characteristic for people over the age of 40. Minors are not considered a protected class. For this reason, the CSET annotation taxonomy has a separate field to note if a minor was involved.
Only mark yes if there is clear evidence that discrimination occurred. If there are conflicting accounts, mark unsure. Do not mark that discrimination occurred based on expectation alone.
Does the incident involve an AI system?
Definition: An assessment of whether or not an AI system was involved. It is sometimes difficult to judge between an AI and an automated system or expert rules system. In these cases, select “maybe.”
Can the technology be directly and clearly linked to the adverse outcome of the incident?
Definition: An assessment of the technology's involvement in the chain of harm. "Yes" indicates that the technology was involved in harm, its behavior can be directly linked to the harm, and the harm may not have occurred if the technology had acted differently. "No" indicates that the technology's behavior cannot be linked to the harm outcome. "Maybe" indicates that the link is unclear.
There is a characterizable class or subgroup of entities that experienced the harm
Definition: A characterizable class or subgroup is a description of a population of people. Often these are characteristics by which people qualify for special protection under a law, policy, or similar authority.
Sometimes, groups may be characterized by their exposure to the incident via geographical proximity (e.g., ‘visitors to the park’) or participation in an activity (e.g., ‘Twitter users’).
The annotator’s assessment of whether an AI special interest intangible harm occurred.
Definition: AI tangible harm is determined in a different field. The determination of a special interest intangible harm is not dependent upon the AI tangible harm level.
Indicates whether the AI system is deployed in the public sector
Definition: Indicate whether the AI system is deployed in the public sector. The public sector is the part of the economy that is controlled and operated by the government.
Autonomy Level
Definition: Autonomy1: The system operates independently without simultaneous human oversight, interaction, or intervention.
Autonomy2: The system operates independently but with human oversight, where a human can observe and override the system’s decisions in real time.
Autonomy3: The system does not independently make decisions but instead provides information to a human who actively chooses to proceed with the AI’s information.
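For annotators who script their workflow, the three levels map naturally onto an enumeration. A minimal sketch in Python; the class, member names, and summary strings are illustrative assumptions, not part of any official CSET tooling:

```python
# Illustrative only: the three autonomy levels defined above as an enum.
from enum import Enum

class AutonomyLevel(Enum):
    AUTONOMY_1 = "operates independently without simultaneous human oversight"
    AUTONOMY_2 = "operates independently; a human can observe and override in real time"
    AUTONOMY_3 = "provides information; a human actively chooses whether to proceed"

print(AutonomyLevel.AUTONOMY_2.value)
```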
Was the AI intentionally developed or deployed to perform the harm?
Definition: Indicates if the system was designed to do harm. If it was designed to perform harm, the field will indicate if the AI system did or did not create unintended harm; i.e., was the reported harm the harm that the AI was expected to perform, or a different, unexpected harm?
AI tools and methods
Definition: Describe the tools and methods that enable the AI’s application.
It is likely that the annotator will not have enough information to complete this field. If this occurs, enter “unclear.”
This is a freeform field. Some possible entries are:
- unclear
- reinforcement learning
- neural networks
- decision trees
- bias mitigation
- optimization
- classifier
- NLP/text analytics
- continuous learning
- unsupervised learning
- supervised learning
- clustering
- prediction
- rules
- random forest
AI tools and methods are the technical building blocks that enable the AI’s application.
The number of the incident in the AI Incident Database.
Definition: The number of the incident in the AI Incident Database.
AI Tangible Harm Level Notes
Definition: If for 3.5 you select “unclear” or leave it blank, please provide a brief description of why.
You can also add notes if you want to provide justification for a level.
Input any notes that may help explain your answers.
Definition: Input any notes that may help explain your answers.
Was there a special interest intangible harm or risk of harm?
Definition: An assessment of whether a special interest intangible harm occurred. This assessment does not consider the context of the intangible harm, if an AI was involved, or if there is a characterizable class or subgroup of harmed entities. It is also not assessing if an intangible harm occurred. It is only asking if a special interest intangible harm occurred.
If for 5.5 you select unclear or leave it blank, please provide a brief description of why. You can also add notes if you want to provide justification for a level.
Definition: If for 5.5 you select “unclear” or leave it blank, please provide a brief description of why.
You can also add notes if you want to provide justification for a level.
The year in which the incident first occurred.
Definition: The year in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the year, estimate. Otherwise, leave blank.
The month in which the incident first occurred.
Definition: The month in which the incident occurred. If there are multiple harms or occurrences of the incident, list the earliest. If a precise date is unavailable, but the available sources provide a basis for estimating the month, estimate. Otherwise, leave blank.
The day on which the incident first occurred.
Definition: The day on which the incident occurred. If a precise date is unavailable, leave blank.
Enter it in the format DD.
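As a concrete illustration, Python’s standard library produces the zero-padded values these date fields expect; nothing here is prescribed by the taxonomy beyond the DD format noted above:

```python
# A minimal sketch of formatting the year, month, and day values
# for the three date fields above, using only the standard library.
from datetime import date

incident_date = date(2023, 3, 5)  # an example date
print(incident_date.strftime("%Y"))  # "2023" - year field
print(incident_date.strftime("%m"))  # "03"   - month field
print(incident_date.strftime("%d"))  # "05"   - day field, zero-padded DD
```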
Is the date estimated?
Definition: “Yes” if the date was estimated. “No” otherwise.
Was the AI interacting with another AI?
Definition: This happens very rarely but is possible. Examples include two chatbots having a conversation with each other, or two autonomous vehicles in a crash.
Is the AI embedded in a physical system, or does it have a physical presence?
Definition: This question is slightly different from the one in field 2.1.1. That question asks about interaction with physical objects: an ability to manipulate or change them. A system can be embedded in a physical object and able to interact with the physical environment, e.g., a vacuum robot. A system can also be embedded in a physical object and not interact with the physical environment, e.g., a camera system that only records images when the AI detects that dogs are present. AI systems that are accessed through an API, a web browser, etc., by using a mobile device or computer are not considered to be embedded in hardware systems; they are accessed through hardware.
If the incident occurred at a specific known location, note the city.
Definition: If the incident occurred at a specific known location, note the city. If there are multiple relevant locations, enter multiple city/state/country values.
If the incident occurred at a specific known location, note the state/province.
Definition: If the incident occurred at a specific known location, note the state/province. If there are multiple relevant locations, enter multiple city/state/country values.
If the incident occurred at a specific known location, note the country.
Definition: Follow ISO 3166 for the 2-letter country codes.
If there are multiple relevant locations, enter multiple city/state/country values.
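If you are normalizing country names programmatically, the third-party pycountry package bundles the ISO 3166 registry. A minimal sketch; the helper function is an assumption for illustration, not part of any official workflow:

```python
# A minimal sketch, assuming the third-party pycountry package,
# for converting a country name found in a report to the 2-letter
# ISO 3166 code this field requires.
import pycountry

def country_to_alpha2(name):
    """Return the ISO 3166 alpha-2 code for a country name, or None."""
    try:
        return pycountry.countries.lookup(name).alpha_2
    except LookupError:
        return None

print(country_to_alpha2("France"))         # -> "FR"
print(country_to_alpha2("United States"))  # -> "US"
print(country_to_alpha2("Atlantis"))       # -> None
```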
Location Region
Definition: Use this reference to map countries to regions: https://www.dhs.gov/geographic-regions
Which critical infrastructure sectors were affected, if any?
Definition: Which critical infrastructure sectors were affected, if any?
A record of any abnormal or atypical operational conditions that occurred.
Definition: A record of any abnormal or atypical operational conditions that occurred. This field is most often blank.
Notes (Environmental and Temporal Characteristics)
Definition: Input any notes that may help explain your answers.
Characterizing Entities and the Harm
Definition: Characterizing Entities and the Harm
How many human lives were lost?
Definition: This field cannot be greater than zero if the harm is anything besides ‘Physical health/safety.’
How many humans were injured?
Definition: This field cannot be greater than zero if the harm is anything besides 'Physical health/safety'.
All reported injuries should count, regardless of their severity level. If a person lost their limb and another person scraped their elbow, both cases would be considered injuries. Do not include the number of deaths in this count.
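To make the constraint on these two quantity fields concrete, here is a minimal sketch of how a record could be checked for consistency; the dictionary keys are illustrative assumptions, not official CSET field names:

```python
# Illustrative consistency check for the two quantity fields above.
def check_harm_quantities(record):
    """Return a list of consistency errors for a single annotation."""
    errors = []
    physical = record.get("harm_type") == "Physical health/safety"
    if not physical and record.get("lives_lost", 0) > 0:
        errors.append("lives_lost must be 0 unless harm is Physical health/safety")
    if not physical and record.get("injuries", 0) > 0:
        errors.append("injuries must be 0 unless harm is Physical health/safety")
    return errors

print(check_harm_quantities({"harm_type": "Financial loss", "injuries": 2}))
```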
Are any quantities estimated?
Definition: Indicates if the amount was estimated.
Notes (Tangible Harm Quantities Information)
Definition: Input any notes that may help explain your answers.
Description of the AI system involved
Definition: Describe the AI system in as much detail as the reports will allow.
A high level description of the AI system is sufficient, but if more technical details about the AI system are available, include them in the description as well.
Description of data inputs to the AI system
Definition: This is a freeform field that can have any value. There could be multiple entries for this field.
Common ones include:
- still images
- video
- text
- speech
- Personally Identifiable Information
- structured data
- other
- unclear
Still images are static images. Video images consist of moving images. Text and speech data are considered an important category of unstructured data. They consist of written and spoken words that are not in a tabular format. Personally identifiable information is data that can uniquely identify an individual and may contain sensitive information. Structured data is often in a tabular, machine readable format and can typically be used by an AI system without much preprocessing.
Avoid using ‘unstructured data’ in this field. Instead, specify the type of unstructured data: text, images, audio files, etc. It is OK to use ‘structured data’ in this field.
Record what the media report explicitly states. If the report does not explicitly state an input modality but it is likely that a particular kind of input contributed to the harm or near harm, record that input. If you are still unsure, do not record anything.
Notes (Information about AI System)
Definition: Input any notes that may help explain your answers.
Into what type of physical system was the AI integrated, if any?
Definition: Describe the type of physical system that the AI was integrated into.
AI task or core application area
Definition: Describe the AI’s application.
It is likely that the annotator will not have enough information to complete this field. If this occurs, enter “unclear.”
This is a freeform field. Some possible entries are:
- unclear
- human language technologies
- computer vision
- robotics
- automation and/or optimization
- other
The application area of an AI is the high level task that the AI is intended to perform. It does not describe the technical methods by which the AI performs the task. Considering what an AI’s technical methods enable it to do is another way of arriving at what an AI’s application is.
It is possible for multiple application areas to be involved. When possible, pick the principal domain area, but it is OK to select multiple areas.
Notes (AI Functionality and Techniques)
Definition: Input any notes that may help explain your answers.