Description: A driverless metro train in Delhi, India crashed during a test run due to faulty brakes.
Entities
Alleged: an unknown developer developed an AI system, deployed by an unnamed party, which harmed Delhi Metro Rail Corporation.
CSETv1 Taxonomy Classifications
Incident Number
The number of the incident in the AI Incident Database.
31
CSETv0 Taxonomy Classifications
Public Sector Deployment
"Yes" if the AI system(s) involved in the accident were being used by the public sector or for the administration of public goods (for example, public transportation); "No" if the system(s) were being used in the private sector or for commercial purposes (for example, a ride-sharing company).
Yes
Infrastructure Sectors
Where applicable, this field indicates if the incident caused harm to any of the economic sectors designated by the U.S. government as critical infrastructure.
Transportation
Lives Lost
Were human lives lost as a result of the incident?
No
Intent
Was the incident an accident, intentional, or is the intent unclear?
Accident
Near Miss
Was harm caused, or was it a near miss?
Near miss
Ending Date
The date the incident ended.
2017-12-19
GMF Taxonomy Classifications
Known AI Goal Snippets
One or more snippets that justify the classification.
(Snippet Text: "New Delhi, Dec 20 (IANS) The Delhi Metro on Wednesday sacked four of its officials, including a Deputy General Manager, for Tuesday's accident in which a metro train rammed through a wall after failure of its brakes." Related Classifications: Autonomous Driving)
Risk Subdomain
A further 23 subdomains create an accessible and understandable classification of the hazards and harms associated with AI.
7.3. Lack of capability or robustness
Risk Domain
The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.
(7) AI system safety, failures & limitations
Entity
Which, if any, entity is presented as the main cause of the risk
Human
Timing
The stage in the AI lifecycle at which the risk is presented as occurring
Pre-deployment
Intent
Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal
Unintentional