AI Incident Database

Incident 1161: Airbnb Host Reportedly Accused of Using Purportedly AI‑Altered Photos in False Damage Claim

Description: A London-based Airbnb guest was reportedly accused by a New York host of causing over £12,000 in damage. The host reportedly submitted photos of the alleged damage, which the guest claims were digitally manipulated or possibly AI-generated. Airbnb is reported to have initially sided with the host and ordered payment, but later refunded the guest in full following an appeal.

Entities

Alleged: Unknown AI image-editing technology developer developed an AI system deployed by Unnamed Airbnb host, which harmed Unnamed Airbnb guest and Airbnb customers.
Alleged implicated AI system: Unknown AI image-editing technology

Incident Stats

Incident ID: 1161
Report Count: 1
Incident Date: 2025-08-02
Editors: Daniel Atherton

Incident Reports

Reports Timeline

Airbnb guest says images were altered in false £12,000 damage claim
theguardian.com · 2025

Airbnb has apologised to a woman after an apartment host falsely claimed she had caused thousands of pounds' worth of damage and used images she says were digitally manipulated to back up his allegations.

The London-based academic was refun…

Variants

A "variant" is an AI incident similar to a known case—it has the same causes, harms, and AI system. Instead of listing it separately, we group it under the first reported incident. Unlike other incidents, variants do not need to have been reported outside the AIID. Learn more from the research paper.

Similar Incidents

By textual similarity


Three Make-Up Artists Lost Jobs Following Black-Box Automated Decision by HireVue
Mar 2022 · 2 reports

Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers
Jul 2017 · 6 reports

Opaque Fraud Detection Algorithm by the UK's Department of Work and Pensions Allegedly Discriminated against People with Disabilities
Oct 2019 · 6 reports

