AI Incident Database

Incident 639: Customer Overcharged Due to Air Canada Chatbot's False Discount Claims

Responded
Description: Air Canada was ordered to pay over $600 in damages for providing inaccurate bereavement discount information via its chatbot, leading to a customer overpaying for flights. The tribunal ruled the airline responsible for the chatbot's misinformation.


Entities

Alleged: An AI system developed and deployed by Air Canada harmed Jake Moffatt.

Incident Stats

ID: 639
Number of Reports: 4
Incident Date: 2022-11-11
Editors: Daniel Atherton
Applied Taxonomies: MIT

MIT Taxonomy Classifications

Machine-Classified
Taxonomy Details

Risk Subdomain

A further 23 subdomains create an accessible and understandable classification of hazards and harms associated with AI.

7.3. Lack of capability or robustness

Risk Domain

The Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental harms, and (7) AI system safety, failures & limitations.

1. AI system safety, failures, and limitations

Entity

Which, if any, entity is presented as the main cause of the risk.

AI

Timing

The stage in the AI lifecycle at which the risk is presented as occurring.

Post-deployment

Intent

Whether the risk is presented as occurring as an expected or unexpected outcome from pursuing a goal.

Unintentional

Incident Reports

Report Timeline

  • Air Canada must pay damages after chatbot lies to grieving passenger about discount (theregister.com)
  • Air Canada must honor refund policy invented by airline’s chatbot (arstechnica.com)
  • Air Canada chatbot promised a discount. Now the airline has to pay it. (washingtonpost.com)
  • Airline held liable for its chatbot giving passenger bad advice - what this means for travellers (bbc.com)

Air Canada must pay damages after chatbot lies to grieving passenger about discount
theregister.com · 2024
Post-incident response by Katyanna Quach

Air Canada must pay a passenger hundreds of dollars in damages after its online chatbot gave the guy wrong information before he booked a flight.

Jake Moffatt took the airline to a small-claims tribunal after the biz refused to refund him f…

Air Canada must honor refund policy invented by airline’s chatbot
arstechnica.com · 2024

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.

On the day Jake Moffatt's grandmother di…

Air Canada chatbot promised a discount. Now the airline has to pay it.
washingtonpost.com · 2024
Post-incident response by Kyle Melnick

After his grandmother died in Ontario a few years ago, British Columbia resident Jake Moffatt visited Air Canada's website to book a flight for the funeral. He received assistance from a chatbot, which told him the airline offered reduced r…

Airline held liable for its chatbot giving passenger bad advice - what this means for travellers
bbc.com · 2024
Post-incident response by Maria Yagoda

Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada's chatbot promised a discount that wasn't available to…

Variants

Une "Variante" est un incident qui partage les mêmes facteurs de causalité, produit des dommages similaires et implique les mêmes systèmes intelligents qu'un incident d'IA connu. Plutôt que d'indexer les variantes comme des incidents entièrement distincts, nous listons les variations d'incidents sous le premier incident similaire soumis à la base de données. Contrairement aux autres types de soumission à la base de données des incidents, les variantes ne sont pas tenues d'avoir des rapports en preuve externes à la base de données des incidents. En savoir plus sur le document de recherche.

Similar Incidents

By textual similarity

Did our AI mess up? Flag the unrelated incidents

Uber AV Killed Pedestrian in Arizona

Tempe police release report, audio, photo

Mar 2018 · 25 reports
Defamation via AutoComplete

Algorithmic Defamation: The Case of the Shameless Autocomplete

Apr 2011 · 28 reports
A Collection of Tesla Autopilot-Involved Crashes

Tesla Model X in Autopilot Killed a Driver. Officials Aren’t Pleased With How Tesla Handled It.

Jun 2016 · 22 reports

