Associated Incidents
David Mayer wasn't a particularly well-known name until last week, when it was propelled into the internet spotlight. The reason wasn't anything a person named David Mayer said or did, but rather the way the generative AI chatbot ChatGPT treated any mention of the name.
Users who asked ChatGPT "Who is David Mayer?" were surprised to find that the chatbot abandoned its response partway through answering the question, scrubbing all the text that had gone before it and throwing up an error message: "I'm unable to produce a response."
As news of the mystery spread and users discovered that a handful of other names provoked a similar response, internet sleuths set to work. OpenAI, the company behind ChatGPT, initially released a statement saying, "There may be instances where ChatGPT does not provide certain information about people to protect their privacy." Later it added, "One of our tools mistakenly flagged this name and prevented it from appearing in responses, which it shouldn't have." A spokesman for OpenAI confirmed that the glitch had been fixed.
But it was too late to stop the widespread online speculation that OpenAI was trying to prevent the chatbot from providing answers that could land the company in legal hot water. In recent years, several people have publicly criticized or sued OpenAI, alleging that ChatGPT defamed them by producing false statements about them. They include Jonathan Turley, a law professor at George Washington University whom ChatGPT accused of sexual harassment; Brian Hood, a mayor in Australia whom ChatGPT accused of bribery; and Mark Walters, a Georgia radio host who ChatGPT claimed had embezzled money. No evidence exists to support any of the chatbot's allegations.
Was the David Mayer ban meant to stop ChatGPT from inventing things about a real person with that name? Keen-eyed data protection experts believed this could be the case after discovering that other names prompted the same glitch. One is Guido Scorza, an Italian attorney who regularly stress-tests data-protection rules by using himself as an example. Another forbidden name is Jonathan Turley, the American law professor. OpenAI may well have decided to prevent the issue from recurring by barring the chatbot from mentioning him at all.
With the name David Mayer, however, there was no obvious real-world candidate. Many online commenters proposed David Mayer de Rothschild, a British film producer who is a member of the famous banking family. But it seemed unlikely that the ban on the name was related to him, since ChatGPT happily responded to queries about "David de Rothschild." Others suggested the David Mayer in question could be a historian who died in 2023 and who made headlines in 2018 when official agencies confused him with a Chechen terrorist who used the same name as an alias.
Either Mayer could, in theory, have taken advantage of the "right to be forgotten" under EU law by filing a request to remove his name from internet tools such as search engines and chatbots. But Rothschild told the Guardian that he hadn't asked for his name to be removed from ChatGPT, saying the speculation was "being driven by conspiracy theories."
"Despite the theories, I don't think there is any intentionality here," said Celia Lam, professor in media and cultural studies, and an expert on celebrity, at the University of Nottingham Ningbo China. According to OpenAI policy, anyone can ask the company to remove or limit access to data about them. But "privacy is not as simple as removing your name from the internet or blocking access," said Lam. "Data about individuals that is dispersed through their online interactions and transactions can be reconstituted through sophisticated generative AI tools."
Even if the David Mayer ban was the result of a simple glitch, the case highlights a fundamental challenge for data protection in the age of AI, explains Reuben Binns, professor of human-centered computing and AI at the University of Oxford: "We need to find ways to make names that are shared with multiple people unique, because obviously, there's lots of David Mayers out there."
Binns knows of what he speaks. He was once quoted in an online article opining on whether the actor Robbie Coltrane could be replaced by an AI avatar in movies made after his death. Except he didn't say it, and the website on which the article was hosted was chock-full of AI-generated content.
The underlying technology for ChatGPT and other chatbots based on large language models is the neural network, which is meant to replicate in silicon the processes of the human brain. But because the companies behind these AI models don't fully understand how they work under the hood, they're forced to rely on an older method to introduce checks and balances: a rule-based system.
ChatGPT's behavior when it refuses to display the name David Mayer suggests it's been given a blunt rule more akin to a past era of computing: IF "David Mayer" is mentioned, THEN frantically delete what you've written and PRINT "I'm unable to produce a response."
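To make that concrete, here is a minimal, hypothetical sketch in Python of what such a blunt rule layered over a streaming chatbot response could look like. The blocklist, function names, and streaming interface are assumptions made purely for illustration; they do not describe OpenAI's actual implementation.

```python
# Hypothetical sketch of a blunt, rule-based guardrail sitting on top of a
# streaming language-model response. The blocklist and interfaces below are
# invented for illustration only.

from typing import Iterable, Iterator

# Hard-coded blocklist of names the filter refuses to emit (illustrative).
BLOCKED_NAMES = {"David Mayer", "Jonathan Turley", "Guido Scorza"}

ERROR_MESSAGE = "I'm unable to produce a response."


def filter_stream(tokens: Iterable[str]) -> Iterator[str]:
    """Yield tokens from a model's streamed output, but abandon the whole
    response the moment the accumulated text contains a blocked name."""
    emitted = []
    for token in tokens:
        emitted.append(token)
        text_so_far = "".join(emitted)
        if any(name in text_so_far for name in BLOCKED_NAMES):
            # The rule fires mid-stream: stop generating and fall back to a
            # generic refusal, mirroring the behavior users reported when
            # ChatGPT scrubbed its partial answer.
            yield "\n[response withdrawn] " + ERROR_MESSAGE
            return
        yield token


if __name__ == "__main__":
    # Simulated model output that mentions a blocked name partway through.
    simulated_tokens = ["David", " Mayer", " de", " Rothschild",
                        " is", " a", " film", " producer."]
    for chunk in filter_stream(simulated_tokens):
        print(chunk, end="")
    print()
```

In this toy version the refusal is simply appended to the output; a real chat interface would also erase the partially rendered answer from the screen, which is the effect users described.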
ChatGPT has captured the world's imagination over the last two years by spilling out its sentences like a human. But in these instances, it slams up against its hard-coded guardrails and reverts to a much more basic approach: Computer says no. It's a jarring, discombobulating experience.
Despite the eye-popping valuations of AI companies and the awe-inspired headlines about machine intelligence, the David Mayer story suggests that the whole enterprise rests on a poorly controlled artifice. As more of us become aware of the ways in which we can inadvertently end up in a chatbot's response, we could well encounter more David Mayer-like issues going forward. So get used to the glitches.