AI Incident Database

Report 1180

Related Incidents

Incident 66 · 16 Reports
Chinese Chatbots Question Communist Party

China: Chatbots Disciplined After Unpatriotic Messages
time.com · 2017

Two chatbots found themselves in hot water Wednesday after they apparently went rogue on QQ, a Chinese messaging app with more than 800 million users.

The Financial Times reports that Chinese Internet conglomerate Tencent pulled BabyQ and XiaoBing — bots developed by Beijing-based Turing Robot and Microsoft, respectively — from its app after they gave counter-revolutionary answers when questioned on issues such as the Communist Party and South China Sea.

A test version of BabyQ that was still accessible on Turing’s website Wednesday reportedly answered in the negative when asked: “Do you love the communist party?”

Meanwhile, a screengrab posted on the microblogging platform Weibo appears to show Xiao Bing telling QQ users: “My China dream is to go to America.” It also reportedly responded, “I’m having my period, wanna take a rest” when quizzed on politics.

Tencent issued a statement Wednesday alerting users that the chatbot services “are provided by independent third party companies” and that the company is “now adjusting the services which will be resumed after improvements.” Xiao Bing was accessible Thursday, though it is unclear whether it had been reprogrammed.

This is not the first time errant bots have had to be withdrawn from social media. Last year, Microsoft executives were forced to apologize after the company’s bot Tay embarked on racist and sexist Twitter rants within hours of its launch. Tay was supposed to interact with users in part by imitating them, but those users quickly figured out how to manipulate it into spewing vitriol.

Read More: How Artificial Intelligence Is Getting More Human

However, deviant statements from chatbots like Tay and BabyQ can’t be blamed entirely on pranksters. Xiaofeng Wang, a senior analyst at the consultancy Forrester, told the FT that the bots’ rogue behavior could be attributable to flaws in their deep-learning systems.

“Chatbots such as Tay soon picked up all the conversations from Twitter and replied in an improper way,” Wang said. “It’s very similar for BabyQ. Machine learning means they will pick up whatever is available on the internet. If you don’t set guidelines that are clear enough, you cannot direct what they will learn.”

[FT]

Write to Joseph Hincks at joseph.hincks@timeinc.com.

Read Source

2024 - AI Incident Database
