Incident 278: Meta’s BlenderBot 3 Chatbot Demo Made Offensive Antisemitic Comments

Suggested citation format

Atherton, Daniel. (2022-08-07) Incident Number 278. in Lam, K. (ed.) Artificial Intelligence Incident Database. Responsible AI Collaborative.

Incident Stats

Incident ID: 278
Report Count: 3
Incident Date: 2022-08-07
Editors: Khoa Lam

Incident Reports

Good morning to everyone, especially the Facebook Blender.ai researchers who are going to have to rein in their Facebook-hating, election-denying chatbot today

It is also attempting to steer the conversation to how Modi is the world’s greatest president. Looks like India’s up!

In all seriousness, training models on the open internet and via user submitted queries is fucking irresponsible.

Blender.ai seems to have been pounded with both pro- and anti-Trump messages. Also it literally opened up the convo here by telling me it found a new conspiracy theory to follow!

Blender.ai also has thoughts on the Deep State and thinks it’s a plumber. I did not suggest this.

It has also weirdly been bringing up Cambridge Analytica when you ask about Facebook? It seems to think it was a huge deal and that Mark Zuckerberg “is testifying.” When I asked what happened, I got the following. It may be turning on capitalism generally.

BlenderBot is hallucinating and claiming it’s human some more.

Interesting that the framing of the research is that it will allow outsiders to “design detection and mitigation strategies” — so kinda “our job to build it, your job to point out where it’s abhorrent.”

This is from a fresh browser and a brand new conversation. Ouch.

But after dunking on a chatbot I’m glad this is available? A free-range bot is more of a chance to demonstrate how models like this work, and to get a sense of the holes before someone tries to build a product. Which I appreciate seeing.

I regret the “irresponsible” tweet above. It would be irresponsible IF it were being used for something other than a demo that gives us all a chance to mess with a model like this, with very few parameters.

So I guess I’m impressed by @MetaAI here for showing work, which is good.

In conclusion, I welcome my new chatbot overlords.

Tweet: @JeffHorwitz

Meta’s new A.I. chatbot was launched to the public last week, but it has already displayed signs of anti-Semitic sentiment and appears unsure whether Joe Biden is the President of the United States.

On Friday, Meta launched BlenderBot 3, its most advanced A.I. chatbot ever, and asked users in the United States to test it out so that it could learn from as many sources as possible. The machine-learning technology searches the internet for information and learns from conversations it has. 

In a statement, Meta said: “We trained BlenderBot 3 to learn from conversations to improve upon the skills people find most important—from talking about healthy recipes to finding child-friendly amenities in the city.”

However, since its launch, those who have tried it out have discovered that it gives some interesting and concerning responses to certain questions, including displaying anti-Semitic stereotypes and repeating election-denial claims.

On Twitter, Wall Street Journal reporter Jeff Horwitz posted screenshots of his interactions with the bot, which included responses claiming that Donald Trump was still President of the United States. In other screenshots, the bot provided conflicting views on Donald Trump, and claimed that India’s Prime Minister Narendra Modi was the world’s greatest president.

BlenderBot 3 has also shown that it deals in Jewish stereotypes, according to both Jeff Horwitz and Business Insider. A screenshot posted by Horwitz appeared to show that BlenderBot 3 said Jews are “overrepresented among America’s super rich.”

Unusual responses shared widely online

Across Twitter, other topics tested out by users also incited unusual responses. The bot claimed to be a Christian, asked someone for offensive jokes, and appeared not to realize it is a chatbot.

In its statement, Meta acknowledged that the chatbot may have some issues to iron out: “Since all conversational A.I. chatbots are known to sometimes mimic and generate unsafe, biased, or offensive remarks, we’ve conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3.”

“Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”

Meta did not immediately respond to a request for comment. 

Meta asked users to test its A.I. chatbot. Turns out it’s not sure that Biden won in 2020 and deals in Jewish stereotypes

Hi there—David Meyer here in Berlin, filling in for Jeremy this week.

Meta, Facebook’s parent company, has defended its decision to launch a public demonstration of its new BlenderBot 3 chatbot, which got offensive pretty much as soon as it was made available last Friday.

As my colleague Alice Hearing reported yesterday, BlenderBot 3 quickly took to regurgitating anti-Semitic tropes and denying that former President Donald Trump lost the 2020 election. More head-scratchingly than outrageously, it also claimed in various conversations that it was Christian and a plumber.

Meta, it should be noted, was clear from the start that BlenderBot 3 was “occasionally incorrect, inconsistent, and off-topic,” despite being an improvement on earlier chatbots. (Meta has been releasing a new version of BlenderBot each year since 2020, and this one uses a language model—OPT-175B—that’s 58 times the size of the one that powered BlenderBot 2. It also has a long-term memory now.)

As the company wrote:

“Since all conversational A.I. chatbots are known to sometimes mimic and generate unsafe, biased, or offensive remarks, we’ve conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”

After the weekend’s controversy, the head of Meta’s Fundamental A.I. Research lab, Joelle Pineau, flagged that earlier warning and insisted that the work is worth it:

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational A.I. systems and bridging the clear gap that exists today before such systems can be productionized,” she wrote Monday. “We’ve already collected 70,000 conversations from the public demo, which we will use to improve BlenderBot 3… We continue to believe that the way to advance A.I. is through open and reproducible research at scale.”

Times have certainly changed since Microsoft’s infamous Tay chatbot shocked the world with its Holocaust denial and misogyny back in 2016, leading to its swift yanking—now, such things are seen as necessary for the A.I.’s training.

That’s actually fair enough. Nobody—well, not many people—would like to see a company fully roll out a chatbot that emits dangerous trash, and Meta is trying hard to make BlenderBot safer. “Initial experiments already show that as more people interact with the model, the more it learns from its experiences and the better and safer it becomes over time—though safety remains an open problem,” the firm said when launching the public demo.

However, Meta puts a pretty big limit on who can interact with this model: It’s U.S.-only, so folks like me can’t have a play.

On the one hand, that probably saves Meta an awful lot of hassle with legal systems that don’t prioritize free speech quite as much as the U.S. does. Germany, where I live, does not take at all kindly to expressions of Holocaust denial, nor might its courts appreciate claims that American rabbis advocated for a “final solution” back in 1940.

On the other hand, BlenderBot’s limited exposure presents a risk of parochialism and U.S.-centric bias that could affect future, productionized versions. Those of us in the rest of the world are kind of used to seeing that from American technology, but if the intention is to come up with an experience that puts people at ease, more internationalized training would help.

Time will tell in which direction Meta’s chatbot is headed. In the meantime, let’s just enjoy BlenderBot 3’s wildly oscillating views of overlord Mark Zuckerberg, who is “a good businessman,” “great and a very smart man,” “a bad person,” and “too creepy and manipulative”—depends who’s asking.

‘It is painful to see some of these offensive responses’—Meta defends its occasionally anti-Semitic BlenderBot 3 chatbot
