
Days after wowing the world with incredibly photorealistic demos of its new AI video generation model, Sora, OpenAI is hitting a big roadblock.
The startup’s signature product, ChatGPT, began returning gibberish, nonsensical outputs to users on the afternoon of Tuesday, Feb. 20, 2024, and many took to X (formerly known as Twitter) to complain.
Some of ChatGPT’s outputs mixed Spanish and English unintelligibly, while others made up words or repeated phrases over and over, despite the large language model (LLM)-powered chatbot not being asked to do so.
Appreciate that few of the responses to this have been "oh no, the supermind has gone off the rails!" and are more like "oh the stolen content laundering machine that's supposed to replace me is glitching up? quelle surprise." https://t.co/hirVglryOY
— Alex Hanna (@alexhanna) February 21, 2024
One astute user compared the seemingly random strings of disconnected words to the unsettling “weird horror” extraterrestrial graffiti from Jeff VanderMeer’s seminal 2014 novel Annihilation, and, as a reader of both, this writer agrees: they share a similarly eerie quality, the sense of an inhuman intelligence gone out of whack or illogical.
While various respondents joked that the strange outputs marked the beginning of a “robot uprising,” such as those depicted in sci-fi film franchises like Terminator and The Matrix, others shrugged them off as glitches. Many pointed out that the episode undermined the idea of generative AI tools performing as well as human beings at writing and code-generation tasks.
As of 3:40 p.m. PST on Feb. 20, OpenAI was aware of the issue and posted as much on its public status dashboard webpage. By 3:47 p.m. PST, the company said it had identified the issue and was “remediating” it. And by almost 5 p.m. PST, the company said it was “continuing to monitor the situation.”
Today at 10:30 a.m. PST, the official verified ChatGPT account on X posted: “went a little off the rails yesterday but should be back and operational!”
went a little off the rails yesterday but should be back and operational!
— ChatGPT (@ChatGPTapp) February 21, 2024
Later, the ChatGPT account posted a screenshot of a post-mortem update on the incident from OpenAI’s website, in which the company stated “an optimization to the user experience introduced a bug with how the model processes language,” but that it had “rolled out a fix” that resolved it.
— ChatGPT (@ChatGPTapp) February 22, 2024
Still, even with a quick fix, the erroneous, rambling answers that came out of nowhere couldn’t help but make me and others question the underlying reliability and integrity of ChatGPT, and of using it or other OpenAI products, including the LLMs powering it (GPT-4 and GPT-3.5), for enterprise uses, especially “safety critical” tasks such as those in transportation, healthcare/medicine, power and engineering.
The level of non-determinism and unhinged-ness of this makes me think a few things:
– We need far more interpretability before using these in safety-critical places
– Public interest may deserve the right to interrogate an un-RLHF-d foundation model to see whats lurking https://t.co/9fbdrbfGne