AI Incident Database

Report 1180

Associated Incidents

Incident 66: Chinese Chatbots Question Communist Party (16 reports)

China: Chatbots Disciplined After Unpatriotic Messages
time.com · 2017

Two chatbots found themselves in hot water Wednesday after they apparently went rogue on QQ, a Chinese messaging app with more than 800 million users.

The Financial Times reports that Chinese Internet conglomerate Tencent pulled BabyQ and XiaoBing — bots developed by Beijing-based Turing Robot and Microsoft, respectively — from its app after they gave counter-revolutionary answers when questioned on issues such as the Communist Party and South China Sea.

A test version of BabyQ that was still accessible on Turing’s website Wednesday reportedly answered in the negative when asked: “Do you love the communist party?”

Meanwhile, a screengrab posted on the microblogging platform Weibo appears to show Xiao Bing telling QQ users: “My China dream is to go to America.” It also reportedly responded, “I’m having my period, wanna take a rest” when quizzed on politics.

Tencent issued a statement Wednesday alerting users that the chatbot services “are provided by independent third party companies” and that the company is “now adjusting the services which will be resumed after improvements.” Xiao Bing was accessible Thursday, though it is unclear whether it had been reprogrammed.

This is not the first time errant bots have had to be withdrawn from social media. Last year, Microsoft executives were forced to apologize after the company’s bot Tay embarked on racist and sexist Twitter rants within hours of its launch. Tay was supposed to interact with users in part by imitating them, but those users quickly figured out how to manipulate it into spewing vitriol.


However, deviant statements from chatbots like Tay and BabyQ can’t be blamed entirely on pranksters. Xiaofeng Wang, a senior analyst at the consultancy Forrester, told the FT that the bots’ rogue behavior could be attributable to flaws in their deep learning systems.

“Chatbots such as Tay soon picked up all the conversations from Twitter and replied in an improper way,” Wang said. “It’s very similar for BabyQ. Machine learning means they will pick up whatever is available on the internet. If you don’t set guidelines that are clear enough, you cannot direct what they will learn.”

[FT]

Write to Joseph Hincks at joseph.hincks@timeinc.com.

