
Hi there—David Meyer here in Berlin, filling in for Jeremy this week.
Meta, Facebook’s parent company, has defended its decision to launch a public demonstration of its new BlenderBot 3 chatbot, which got offensive pretty much as soon as it was made available last Friday.
As my colleague Alice Hearing reported yesterday, BlenderBot 3 quickly took to regurgitating anti-Semitic tropes and denying that former President Donald Trump lost the 2020 election. More head-scratching than outrageous, it also claimed in various conversations that it was Christian and a plumber.
Meta, it should be noted, was clear from the start that BlenderBot 3 was “occasionally incorrect, inconsistent, and off-topic,” despite being an improvement on earlier chatbots. (Meta has been releasing a new version of BlenderBot each year since 2020, and this one uses a language model—OPT-175B—that’s 58 times the size of the one that powered BlenderBot 2. It also has a long-term memory now.)
As the company wrote:
“Since all conversational A.I. chatbots are known to sometimes mimic and generate unsafe, biased, or offensive remarks, we’ve conducted large-scale studies, co-organized workshops, and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”
After the weekend’s controversy, Meta’s fundamental A.I. research chief, Joelle Pineau, pointed to that earlier warning and insisted that the work is worth it:
“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational A.I. systems and bridging the clear gap that exists today before such systems can be productionized,” she wrote Monday. “We’ve already collected 70,000 conversations from the public demo, which we will use to improve BlenderBot 3…We continue to believe that the way to advance A.I. is through open and reproducible research at scale.”
Times have certainly changed since Microsoft’s infamous Tay chatbot shocked the world with its Holocaust denial and misogyny back in 2016, leading to its swift removal. Now, such public missteps are seen as a necessary part of the A.I.’s training.
That’s actually fair enough. Nobody—well, not many people—would like to see a company fully roll out a chatbot that emits dangerous trash, and Meta is trying hard to make BlenderBot safer. “Initial experiments already show that as more people interact with the model, the more it learns from its experiences and the better and safer it becomes over time—though safety remains an open problem,” the firm said when launching the public demo.
However, Meta puts a pretty big limit on who can interact with this model: It’s U.S.-only, so folks like me can’t have a play.
On the one hand, that probably saves Meta an awful lot of hassle with legal systems that don’t prioritize free speech quite as much as the U.S. does. Germany, where I live, does not take at all kindly to expressions of Holocaust denial, nor might its courts appreciate claims that American rabbis advocated for a “final solution” back in 1940.
On the other hand, BlenderBot’s limited exposure presents a risk of parochialism and U.S.-centric bias that could affect future, productionized versions. Those of us in the rest of the world are kind of used to seeing that from American technology, but if the intention is to come up with an experience that puts people at ease, more internationalized training would help.
Time will tell in which direction Meta’s chatbot is headed. In the meantime, let’s just enjoy BlenderBot 3’s wildly oscillating views of overlord Mark Zuckerberg, who is “a good businessman,” “great and a very smart man,” “a bad person,” and “too creepy and manipulative,” depending on who’s asking.