AI Incident Database

CSETv0

What is the GMF taxonomy?

The Goals, Methods, and Failures (GMF) taxonomy is a failure cause analysis taxonomy that interrelates the goals of the system deployment, the system's methods, and their likely failures. Details on the process are available in the recent work published for the SafeAI paper.

How do I explore the taxonomy?

All taxonomies can be used to filter incident reports within the Discover application. The taxonomy filters work similarly to how you filter products on an e-commerce website. Use the search field at the bottom of the "Classifications" tab to find the taxonomy field you would like to filter with, then click the desired value to apply the filter.
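The same kind of filtering can be sketched offline against the downloadable database. The record layout below (`incident_id`, `namespace`, `attributes`) is an illustrative assumption for the example, not the actual snapshot schema:

```python
# Hypothetical classification records; the real export's schema may differ.
records = [
    {"incident_id": 1, "namespace": "CSETv0",
     "attributes": {"Overall severity of harm": "Severe"}},
    {"incident_id": 2, "namespace": "CSETv0",
     "attributes": {"Overall severity of harm": "Negligible"}},
]

def filter_by_field(records, namespace, field, value):
    """Return incident IDs whose classification in the given taxonomy
    namespace has `field` set to `value`."""
    return [r["incident_id"] for r in records
            if r["namespace"] == namespace
            and r["attributes"].get(field) == value]

print(filter_by_field(records, "CSETv0", "Overall severity of harm", "Severe"))
# [1]
```

This mirrors the Discover UI's behavior: pick a taxonomy field, pick a value, and keep only matching incidents.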

About the Responsible AI Collaborative

The AI Incident Database is a collaborative project of many people and organizations. Details on the people and organizations contributing to this particular taxonomy will appear here, and you can learn more about the Collab itself on the database's home and about pages.

The maintainers of this taxonomy include:

  • Nikiforos Pittaras
  • Sean McGregor

Taxonomy Fields

Overall severity of harm (Searchable in the Discover Application)

Discover:
  • Negligible
    46 Incidents
  • Minor
    19 Incidents
  • Unclear/unknown
    16 Incidents
  • Moderate
    12 Incidents
  • Severe
    6 Incidents

Definition: An estimate of the overall severity of harm caused. "Negligible" harm means minor inconvenience or expense, easily remedied. "Minor" harm means limited damage to property, social stability, the political system, or civil liberties occurred or nearly occurred. "Moderate" harm means that humans were injured (but not killed) or nearly injured, or that financial, property, social, or political interests or civil liberties were materially affected (or nearly so affected). "Severe" harm means that a small number of humans were or were almost gravely injured or killed, or that financial, property, social, or political interests or civil liberties were significantly disrupted on at least a regional or national scale (or nearly so disrupted). "Critical" harm means that many humans were or were almost killed, or that financial, property, social, or political interests were seriously disrupted at a national or global scale (or nearly so disrupted).
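When analyzing severity labels, it can help to treat them as an ordered scale rather than free text. The ordinal encoding below is an assumption for analysis purposes, not part of the taxonomy itself:

```python
# Severity labels from the definition above, from least to most severe.
SEVERITY_ORDER = ["Negligible", "Minor", "Moderate", "Severe", "Critical"]

def severity_rank(label: str) -> int:
    """Ordinal position of a severity label; -1 for labels outside the
    scale, such as "Unclear/unknown"."""
    try:
        return SEVERITY_ORDER.index(label)
    except ValueError:
        return -1

print(severity_rank("Severe"))  # 3
```

With such a ranking, incidents can be sorted or thresholded by severity (e.g. keep only incidents ranked "Moderate" or above).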

Uneven distribution of harms basis (Searchable in the Discover Application)

Discover:
  • Race
    23 Incidents
  • Sex
    13 Incidents
  • Religion
    7 Incidents
  • National origin or immigrant status
    6 Incidents
  • Age
    5 Incidents

Definition: If harms were unevenly distributed, this field indicates the basis or bases on which they were unevenly distributed.

Harm type (Searchable in the Discover Application)

Discover:
  • Harm to social or political systems
    19 Incidents
  • Psychological harm
    18 Incidents
  • Harm to physical health/safety
    17 Incidents
  • Harm to civil liberties
    16 Incidents
  • Financial harm
    12 Incidents

Definition: Indicates the type(s) of harm caused or nearly caused by the incident.

System developer (Searchable in the Discover Application)

Discover:
  • Google
    18 Incidents
  • Amazon
    6 Incidents
  • Tesla
    4 Incidents
  • Facebook
    4 Incidents
  • Knightscope
    3 Incidents

Definition: The entity that created the AI system.

Sector of deployment (Searchable in the Discover Application)

Discover:
  • Information and communication
    26 Incidents
  • Transportation and storage
    13 Incidents
  • Arts, entertainment and recreation
    13 Incidents
  • Public administration and defence
    12 Incidents
  • Administrative and support service activities
    7 Incidents
    7 Incidents

Definition: The primary economic sector in which the AI system(s) involved in the incident were operating.

Relevant AI functions (Searchable in the Discover Application)

Discover:
  • Cognition
    80 Incidents
  • Perception
    66 Incidents
  • Action
    57 Incidents
  • Unclear
    7 Incidents

Definition: Indicates whether the AI system(s) were intended to perform any of the following high-level functions: "Perception," i.e. sensing and understanding the environment; "Cognition," i.e. making decisions; or "Action," i.e. carrying out decisions through physical or digital means.

AI tools and techniques used (Searchable in the Discover Application)

Discover:
  • machine learning
    19 Incidents
  • Facial recognition
    6 Incidents
  • open-source
    6 Incidents
  • natural language processing
    5 Incidents
  • environmental sensing
    5 Incidents

Definition: Open-ended tags that indicate the hardware and software involved in the AI system(s).

AI functions and applications used (Searchable in the Discover Application)

Discover:
  • decision support
    10 Incidents
  • recommendation engine
    9 Incidents
  • autonomous driving
    9 Incidents
  • Facial recognition
    8 Incidents
  • image recognition
    8 Incidents

Definition: Open-ended tags that describe the functions and applications of the AI system.

Location (Searchable in the Discover Application)

Discover:
  • Global
    27 Incidents
  • United States
    6 Incidents
  • Los Angeles, CA
    2 Incidents
  • New Zealand
    2 Incidents
  • Palo Alto, CA
    2 Incidents

Definition: The location or locations where the incident played out.

Named entities (Searchable in the Discover Application)

Discover:
  • Google
    18 Incidents
  • Amazon
    8 Incidents
  • Microsoft
    6 Incidents
  • Tesla
    4 Incidents
  • Facebook
    4 Incidents

Definition: All named entities (such as people, organizations, locations, and products - generally proper nouns) that seem to have a significant relationship with this event, as indicated by the available evidence.

Party responsible for AI system (Searchable in the Discover Application)

Discover:
  • Google
    19 Incidents
  • Amazon
    7 Incidents
  • Tesla
    5 Incidents
  • Facebook
    4 Incidents
  • Apple
    4 Incidents

Definition: A list of parties (up to three) that were responsible for the relevant AI tool or system, i.e. that had operational control over the AI-related system causing harm (or control over those who did).

Harm nearly missed? (Searchable in the Discover Application)

Discover:
  • Unclear/unknown
    44 Incidents
  • Harm caused
    41 Incidents
  • Near miss
    15 Incidents

Definition: Was harm caused, or was it a near miss?

Probable level of intent (Searchable in the Discover Application)

Discover:
  • Accident
    72 Incidents
  • Unclear
    24 Incidents
  • Deliberate or expected
    4 Incidents

Definition: Indicates whether the incident was deliberate/expected or accidental, based on the available evidence. "Deliberate or expected" applies if it is established or highly likely that the system acted more or less as expected, from the perspective of at least one of the people or entities responsible for it. "Accident" applies if it is established or highly likely that the harm arose from the system acting in an unexpected way. "Unclear" applies if the evidence is contradictory or too thin to apply either of the above labels.

Human lives lost (Searchable in the Discover Application)

Discover:
  • false
    92 Incidents
  • true
    8 Incidents

Definition: Marked "true" if one or more people died as a result of the accident, "false" if there is no evidence of lives being lost, "unclear" otherwise.

Critical infrastructure sectors affected (Searchable in the Discover Application)

Discover:
  • Transportation
    10 Incidents
  • Healthcare and public health
    4 Incidents
  • Communications
    2 Incidents
  • Government facilities
    2 Incidents
  • Financial services
    1 Incident

Definition: Where applicable, this field indicates if the incident caused harm to any of the economic sectors designated by the U.S. government as critical infrastructure.

Public sector deployment (Searchable in the Discover Application)

Discover:
  • false
    88 Incidents
  • true
    12 Incidents

Definition: "Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).

Nature of end user (Searchable in the Discover Application)

Discover:
  • Amateur
    73 Incidents
  • Expert
    18 Incidents

Definition: "Expert" if users with special training or technical expertise were the ones meant to benefit from the AI system(s)' operation; "Amateur" if the AI systems were primarily meant to benefit the general public or untrained users.

Level of autonomy (Searchable in the Discover Application)

Discover:
  • Medium
    36 Incidents
  • High
    31 Incidents
  • Low
    14 Incidents
  • Unclear/unknown
    10 Incidents

Definition: The degree to which the AI system(s) functions independently from human intervention. "High" means there is no human involved in the system action execution; "Medium" means the system generates a decision and a human oversees the resulting action; "Low" means the system generates decision-support output and a human makes a decision and executes an action.
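The three-level rubric above can be paraphrased as a small decision function. This is an illustrative reading of the definition, not code from the taxonomy:

```python
def autonomy_level(human_oversees_action: bool, human_executes_action: bool) -> str:
    """Classify autonomy per the rubric: "Low" when a human makes the
    decision and executes the action, "Medium" when the system decides
    and a human oversees the resulting action, "High" when no human is
    involved in action execution."""
    if human_executes_action:
        return "Low"
    if human_oversees_action:
        return "Medium"
    return "High"

print(autonomy_level(False, False))  # High
```

A fully self-driving vehicle with no safety driver would classify as "High", while a clinical decision-support tool whose output a physician acts on would be "Low".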

Physical system (Searchable in the Discover Application)

Discover:
  • Software only
    66 Incidents
  • Vehicle/mobile robot
    16 Incidents
  • Consumer device
    7 Incidents
  • Unknown/unclear
    2 Incidents
  • Other: Medical system
    1 Incident

Definition: Where relevant, indicates whether the AI system(s) was embedded into or tightly associated with specific types of hardware.

Causative factors within AI system (Searchable in the Discover Application)

Discover:
  • Specification
    45 Incidents
  • Robustness
    35 Incidents
  • Unknown/unclear
    22 Incidents
  • Assurance
    16 Incidents

Definition: Indicates which, if any, of the following types of AI failure describe the incident: "Specification," i.e. the system's behavior did not align with the true intentions of its designer, operator, etc; "Robustness," i.e. the system operated unsafely because of features or changes in its environment, or in the inputs the system received; "Assurance," i.e. the system could not be adequately monitored or controlled during operation.

Full description of the incident

Definition: A plain-language description of the incident in one paragraph or less.

Short description of the incident

Definition: A one-sentence description of the incident.

Description of AI system involved

Definition: A brief description of the AI system(s) involved in the incident, including the system's intended function, the context in which it was deployed, and any available details about the algorithms, hardware, and training data involved in the system.

Beginning date

Definition: The date the incident began.

Ending date

Definition: The date the incident ended.

Total financial cost

Definition: The stated or estimated financial cost of the incident, if reported.

Laws covering the incident

Definition: Relevant laws under which entities involved in the incident may face legal liability as a result of the incident.

Description of the data inputs to the AI systems

Definition: A brief description of the data that the AI system(s) used or were trained on.

2024 - AI Incident Database
