AI Incident Database

Report 1578

Associated Incidents

Incident 183 · 6 Reports
Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers

Airbnb blasted for using algorithm that judges if users are ‘trustworthy’
thenewdaily.com.au · 2022

Airbnb may be using automated decision making to boot users from the short-term rental platform, based on factors like social media activity, employment history and IP address.

Consumer advocacy group Choice called out Airbnb in a report questioning the lack of transparency around its use of a “secretive algorithm” that judges if users are ‘trustworthy’.

According to Choice, Airbnb bought background-check startup Trooly in 2017.

Since then, it has reportedly updated Trooly's patent several times, suggesting the technology is in use.

As far as the fine print goes, Airbnb “may conduct profiling” using your interactions with the platform, as well as information obtained from third parties.

Its privacy policy says automated processes, which analyse users’ activities on and off Airbnb, could restrict or suspend access to the platform.

“Choice is concerned that businesses are implementing automated decision making widely without informing consumers of the risks, offering avenues to opt out or opportunity to review decisions,” said Kate Bower, consumer data advocate at Choice.

Can you trust the ‘trustworthy’?

Generally speaking, Dr Marc Cheong, senior lecturer in information systems (digital ethics) at the University of Melbourne, said such automated decision making poses two concerns.

“The main issue is those affected by this algorithm may not have an opportunity to appeal or seek recourse about the actions taken by the AI,” Dr Cheong told The New Daily.

The second concern is whether or not the system is even collecting accurate data.

If you have a machine trawling through everything about you online, how do you make sure it’s correct?


Choice spoke to Australians who have been booted from Airbnb, despite receiving good reviews.

Renae Macheda, who described herself and her husband as “clean, boring people”, said she received no real explanation for the ban.

“To give nothing at all and no options to try and remedy whatever it is, it’s not really good enough,” she told Choice.

‘Guilty by association’

What about pictures or comments you are tagged in online?

Dr Cheong said the affected user would then be “guilty by association”, since it is the activities of the people in your social network, rather than your own, that affect how you are judged.

Professor of Law Jeannie Paterson, director of the Centre for AI and Digital Ethics at the University of Melbourne, said algorithmic decision making is not only difficult to understand but can also entrench or amplify existing biases and lead to discrimination.

“The idea that you can determine ‘trustworthiness’ from someone’s social media presence has got to be ‘junk science’, to quote the previous human rights commissioner Ed Santow,” Professor Paterson told TND.

The case for law reform

As Professor Paterson explained, Airbnb is a private company.

“It can decide who does and does not use its properties.”

Unfortunately, as the law exists now, people who get thrown off the platform have very few options.

“You’re wrongly accused of being disruptive and you respond, ‘I’m not’, but you’ve got no way of challenging that decision,” Professor Paterson said.

“That decision is made by an algorithm and Airbnb doesn’t appear to provide a process to appeal that, unlike Instagram or TikTok.”

Although a customer could argue, albeit with difficulty, that their removal was unfair, Professor Paterson said the strongest option is championing law reform.

“So shining a light on these practices is really important because at the end of the day, businesses like Airbnb thrive on reputation and if there is a reputational risk in using this unfair process, it might be a prompt for them to clean up their act.”

Professor Paterson said social media and the platform economy are part of our lives now, so it’s important that businesses – and it isn’t just Airbnb using automated decision making – recognise they need good governance.

Striking a somewhat optimistic note, Dr Cheong said responsible use of AI needs, at a minimum, human oversight and auditing frameworks to restore confidence.

There must be a clear and transparent process for contesting decisions that affect people.

Airbnb did not respond to The New Daily’s request for comment in time for publication.
