What is the CSET Taxonomy?
The Center for Security and Emerging Technology (CSET) taxonomy is a general taxonomy of AI incidents. It classifies a large number of attributes, including ones pertaining to safety, fairness, industry, geography, timing, and cost. All classifications within the CSET taxonomy are first applied by one CSET annotator and reviewed by a second CSET annotator before the classifications are finalized. The combination of a rigorously defined coding set and the completeness with which it has been applied makes the CSET taxonomy the AIID's gold standard for taxonomies. Nevertheless, the CSET taxonomy is an ongoing effort, and you are invited to report any errors you may discover in its application.
How do I explore the taxonomy?
All taxonomies can be used to filter incident reports within the Discover Application. The taxonomy filters work much like the product filters on an e-commerce website: use the search field at the bottom of the “Classifications” tab to find the taxonomy field you would like to filter on, then click the desired value to apply the filter.
A policy research organization within Georgetown University’s Walsh School of Foreign Service, CSET produces data-driven research at the intersection of security and technology, providing nonpartisan analysis to the policy community. CSET is currently focusing on the effects of progress in artificial intelligence (AI), advanced computing, and biotechnology. CSET seeks to prepare a new generation of decision-makers to address the challenges and opportunities of emerging technologies.
Full description of the incident
A plain-language description of the incident in one paragraph or less.
Short description of the incident
A one-sentence description of the incident.
Overall severity of harm Searchable in Discover App
An estimate of the overall severity of harm caused. "Negligible" harm means minor inconvenience or expense, easily remedied. "Minor" harm means limited damage to property, social stability, the political system, or civil liberties occurred or nearly occurred. "Moderate" harm means that humans were injured (but not killed) or nearly injured, or that financial, property, social, or political interests or civil liberties were materially affected (or nearly so). "Severe" harm means that a small number of humans were or were almost gravely injured or killed, or that financial, property, social, or political interests or civil liberties were significantly disrupted at a regional or national scale, at minimum (or nearly so). "Critical" harm means that many humans were or were almost killed, or that financial, property, social, or political interests were seriously disrupted at a national or global scale (or nearly so).
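The five severity levels above form an ordered scale, from "Negligible" up to "Critical." As an illustrative sketch (the class and member names here are assumptions for the example, not the taxonomy's actual schema), the ordering could be represented as:

```python
from enum import IntEnum

class HarmSeverity(IntEnum):
    """Ordered severity-of-harm levels, paraphrasing the definitions above.
    Names and numeric values are illustrative, not CSET's published schema."""
    NEGLIGIBLE = 1  # minor inconvenience or expense, easily remedied
    MINOR = 2       # limited damage occurred or nearly occurred
    MODERATE = 3    # humans injured (not killed), or interests materially affected
    SEVERE = 4      # grave injury/death of a few, or regional/national disruption
    CRITICAL = 5    # many deaths, or national/global disruption

# Because IntEnum members are ordered, severities can be compared directly:
assert HarmSeverity.SEVERE > HarmSeverity.MODERATE
```

Encoding the levels as an ordered enumeration makes comparisons and threshold filters (e.g. "show incidents of Severe harm or worse") straightforward.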
Description of AI system involved
A brief description of the AI system(s) involved in the incident, including the system’s intended function, the context in which it was deployed, and any available details about the algorithms, hardware, and training data involved in the system.
Sector of deployment Searchable in Discover App
The primary economic sector in which the AI system(s) involved in the incident were operating.
Relevant AI functions Searchable in Discover App
Indicates whether the AI system(s) were intended to perform any of the following high-level functions: "Perception," i.e. sensing and understanding the environment; "Cognition," i.e. making decisions; or "Action," i.e. carrying out decisions through physical or digital means.
Named entities Searchable in Discover App
All named entities (such as people, organizations, locations, and products, which are generally proper nouns) that seem to have a significant relationship with this event, as indicated by the available evidence.
Organization or person responsible for the technology Searchable in Discover App
A list of parties (up to three) that were responsible for the relevant AI tool or system, i.e. that had operational control over the AI-related system causing harm (or control over those who did).
Beginning date
The date the incident began.
Ending date
The date the incident ended.
Probable level of intent Searchable in Discover App
Indicates whether the incident was deliberate/expected or accidental, based on the available evidence. "Deliberate or expected" applies if it is established or highly likely that the system acted more or less as expected, from the perspective of at least one of the people or entities responsible for it. "Accident" applies if it is established or highly likely that the harm arose from the system acting in an unexpected way. "Unclear" applies if the evidence is contradictory or too thin to apply either of the above labels.
Infrastructure sectors affected Searchable in Discover App
Where applicable, this field indicates if the incident caused harm to any of the economic sectors designated by the U.S. government as critical infrastructure.
Total financial cost
The stated or estimated financial cost of the incident, if reported.
Laws covering the incident
Relevant laws under which entities involved in the incident may face legal liability as a result of the incident.
Description of the data inputs to the AI systems
A brief description of the data that the AI system(s) used or were trained on.
Public sector deployment Searchable in Discover App
"Yes" if the AI system(s) involved in the incident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).
Nature of end user Searchable in Discover App
"Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)’ operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.
Level of autonomy Searchable in Discover App
The degree to which the AI system(s) function independently of human intervention. "High" means no human is involved in executing the system's action; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes the decision and executes the action.
Physical system Searchable in Discover App
Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.
Causative factors within AI system Searchable in Discover App
Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc.; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.
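Taken together, the fields above describe one structured classification record per incident. A minimal sketch of such a record (the class name, field names, and example values are assumptions for illustration, not the taxonomy's published schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CsetClassification:
    """Illustrative subset of CSET taxonomy fields for one incident.
    Field names are assumptions for this sketch, not the real schema."""
    short_description: str                      # one-sentence description
    severity: str                               # "Negligible" .. "Critical"
    sector_of_deployment: Optional[str] = None  # primary economic sector
    intent: str = "Unclear"                     # "Deliberate or expected" / "Accident" / "Unclear"
    level_of_autonomy: Optional[str] = None     # "High" / "Medium" / "Low"
    ai_failure_causes: list[str] = field(default_factory=list)
    # each entry one of: "Specification", "Robustness", "Assurance"

# Hypothetical example record, for illustration only:
record = CsetClassification(
    short_description="An autonomous delivery robot blocked a crosswalk.",
    severity="Minor",
    intent="Accident",
    level_of_autonomy="High",
    ai_failure_causes=["Robustness"],
)
```

Because the classification values are drawn from fixed vocabularies, records in this shape are easy to index and filter, which is what powers the faceted search in the Discover Application.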