AI Incident Database

Report 3702

Related Incidents

Incident 636 · 5 Reports
AI Romance Apps Reportedly Compromise User Privacy for Data Harvesting

AI girlfriends will only break your heart, privacy experts warn
businessinsider.com · 2024

There's a potentially dangerous reality looming beneath the veneer of AI romance, according to a new Valentine's Day-themed study, which concluded that the chatbots can be a privacy nightmare.

Internet nonprofit The Mozilla Foundation took stock of the burgeoning landscape, reviewing 11 chatbots and concluding that all were untrustworthy — falling within the worst category of products it reviews for privacy.

"Although they are marketed as something that will enhance your mental health and well-being," researcher Misha Rykov wrote of romantic chatbots in the report, "they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you."

According to its survey of the space, 73% of the apps don't share how they manage security vulnerabilities, 45% allow weak passwords, and all but one (Eva AI Chat Bot & Soulmate) share or sell personal data.

Furthermore, the privacy policy for CrushOn.AI states it can collect information on users' sexual health, prescription meds, and gender-affirming care, per the Mozilla Foundation.

Some apps feature chatbots whose character descriptions include violence or underage abuse, while others warn that their bots could be unsafe or hostile.

The Mozilla Foundation noted that in the past, apps had encouraged dangerous behavior, including suicide (Chai AI) and an assassination attempt on the late Queen Elizabeth II (Replika).

Chai AI and CrushOn.AI didn't respond to Business Insider's request for comment. A representative for Replika told BI: "Replika has never sold user data and does not, and has never, supported advertising either. The only use of user data is to improve conversations."

An EVA AI spokesperson told BI that it was reviewing its password policies to provide better user protection, adding that it works to keep "meticulous control" of its language models.

EVA said it prohibits discussion of an array of topics including pedophilia, suicide, zoophilia, political and religious opinions, sexual and racial discrimination, and many more.

For those who find the prospect of AI romance impossible to resist, the Mozilla Foundation urges several precautions, including not saying anything you wouldn't want a colleague or family member to read, using a strong password, opting out of AI training, and limiting the app's access to other mobile features such as your location, microphone, and camera.

"You shouldn't have to pay for cool new technologies with your safety or your privacy," the report concluded.

Read the Source


2024 - AI Incident Database
