Welcome to the AI Incident Database
Incident 1374: Purportedly AI-Generated Sepsis Alert Reportedly Prompted Potentially Inappropriate IV Fluid Administration for a Dialysis Patient, Averted by Clinician Intervention
"AI enters the exam room" (Latest Incident Report)
Adam Hart has been a nurse at St. Rose Dominican Hospital in Henderson, Nev., for 14 years. A few years ago, while assigned to help out in the emergency department, he was listening to the ambulance report on a patient who'd just arrived---an elderly woman with dangerously low blood pressure---when a sepsis flag flashed in the hospital's electronic system.
Sepsis, a life-threatening response to infection, is a major cause of death in U.S. hospitals, and early treatment is critical. The flag prompted the charge nurse to instruct Hart to room the patient immediately, take her vitals and begin intravenous (IV) fluids. It was protocol; in an emergency room, that often means speed.
But when Hart examined the woman, he saw that she had a dialysis catheter below her collarbone. Her kidneys weren't keeping up. A routine flood of IV fluids, he warned, could overwhelm her system and end up in her lungs. The charge nurse told him to do it anyway because of the sepsis alert generated by the hospital's artificial-intelligence system. Hart refused.
A physician overheard the escalating conversation and stepped in. Instead of fluids, the doctor ordered dopamine to raise the patient's blood pressure without adding volume---averting what Hart believed could have led to a life-threatening complication.
What stayed with Hart was the choreography that the AI-generated alert produced. A screen prompted urgency, which a protocol turned into an order; a bedside objection grounded in clinical reasoning landed, at least in the moment, as defiance. No one was acting in bad faith. Still, the tool pushed them to comply when the evidence right in front of them---the patient and her compromised kidneys---demanded the exact opposite. (A hospital spokesperson said that they could not comment on a specific case but that the hospital views AI as "one of the many tools that supports, not supersedes, the expertise and judgment of our care teams.")
That dynamic is becoming familiar in U.S. health care. Over the past several years hospitals have woven algorithmic models into routine practice. Clinical care often relies on matching a patient's symptoms against rigid protocols---an environment ideal for automation. For an exhausted workforce, the appeal of handing off routine tasks such as documentation to AI is undeniable.
The technologies already implemented span a spectrum from predictive models that calculate simple risk scores to agentic AI that promises autonomous decision-making---enabling systems to titrate a patient's oxygen flow or reprioritize an ER triage queue with little human input. A pilot project launched in Utah a few months ago uses chatbot technology with agentic capabilities to renew prescriptions, a move proponents say gives providers more time, although physician associations have opposed the removal of human oversight. Across the country, health systems are using similar tools to flag risks, ambiently listen to visits with patients, generate clinical notes, monitor patients via wearable devices, match participants to clinical trials, and even manage the logistics of operating rooms and intensive care unit transfers.
The industry is chasing a vision of truly continuous care: a decision-making infrastructure that keeps tabs on patients between appointments by combining what's in the medical record---laboratory test results, imaging, notes, meds---with population data and with the data people generate on their own by using, for instance, wearables and food logs. It watches for meaningful changes, sends guidance or prompts, and flags cases that need human input. Proponents argue this kind of data-intensive, always-on monitoring is beyond the cognitive scope of any human provider.
Others say clinicians must stay in the loop, using AI not as autopilot but as a tool to help them make sense of vast troves of data. Last year Stanford Medicine rolled out ChatEHR, a tool that allows clinicians to "chat" with a patient's medical records. One physician shared that the tool found critical information buried in the records of a cancer patient, which helped a team including six pathologists to give a definitive diagnosis. "If that doesn't prove the value of EHR, I don't know what does," they reported.
At the same time, on many hospital floors these digital promises often fracture, according to Anaeze Offodile, chief strategy officer at Memorial Sloan Kettering Cancer Center in New York City. He notes that faulty algorithms, poor implementation and low return on investment have caused some projects to stall. On the ground, nurses, who are tasked with caring for patients, are increasingly wary of unvalidated tools. This friction has moved from the ward into the streets. In the past two years nurses in California and New York City have staged demonstrations to draw attention to unregulated algorithmic tools entering the health-care system, arguing that while hospitals invest in AI the bedside remains dangerously short-staffed.
Sepsis prediction has become a cautionary case. Hospitals across the U.S. widely adopted a sepsis-prediction algorithm from health information technology company Epic. Later evaluations found it substantially less accurate than marketed. Epic says that studies in clinical settings have found its sepsis model improved outcomes and that it has since released a second version it claims performs better. Still, nurses saw how an imperfect product could become policy---and then become their problem.
Burnout, staffing shortages and rising workplace violence are already thinning the nursing workforce, according to a 2024 nursing survey. Those pressures spilled onto the steps of New York City Hall last November, when members of the New York State Nurses Association rallied and then testified before the City Council's hospitals committee. They argued that some of the city's biggest private systems are pouring money into executives and AI projects while hospital units remain understaffed and nurses face escalating safety risks. As this story was going to press in mid-January, 15,000 nurses at hospital systems in New York City were on strike, demanding safer staffing levels and workplace protections.
New AI-enabled monitoring models often arrive in hospitals with the same kind of hype that has accompanied AI in other industries. In 2023 UC Davis Health rolled out BioButton in its oncology bone marrow transplant unit, calling it "transformational." The device, a small, hexagonal silicone sensor worn on a patient's chest, continuously tracked vital signs such as heart rate, temperature and breathing patterns.
On the floor it frequently generated alerts that were difficult for nurses to interpret. For Melissa Beebe, a registered nurse who has worked at UC Davis Health for 17 years, the pings offered little actionable data. "This is where it became really problematic," she says. "It was vague." The notifications flagged changes in vital signs without specifics.
Beebe says she often followed alarms that led nowhere. "I have my own internal alerts---'something's wrong with this patient, I want to keep an eye on them'---and then the BioButton would have its own thing going on. It was overdoing it but not really giving great information."
As a union representative for the California Nurses Association at UC Davis Health, Beebe requested a formal discussion with hospital leadership before the devices were rolled out, as allowed by the union's contract. "It's just really hyped: 'Oh, my gosh, this is going to be so transformative, and aren't you so lucky to be able to do it?'" she says. She felt that when she and other nurses raised questions, they were seen as resistant to technology. "I'm a WHY nurse. To understand something, I have to know why. Why am I doing it?"
Among the nurses' concerns were how the device would work on different body types and how quickly they were expected to respond to alerts. Beebe says leadership had few clear answers. Instead nurses were told the device could help with early detection of hemorrhagic strokes, which patients were particularly at risk for on her floor. "But the problem is that heart rate, temperature and respiratory rate, for a stroke, would be some pretty late signs of an issue," she says. "You'd be kind of dying at that point." Earlier signs of a hemorrhagic stroke may be difficulty rousing the patient, slurred speech or balance problems. "None of those things are BioButton parameters."
In the end, UC Davis Health stopped using the BioButtons after piloting the technology for about a year, Beebe says. "What they were finding was that in the patients who were really sick and would benefit from that kind of alert, the nurses were catching it much faster," she explains. (UC Davis Health said in a statement that it piloted BioButton alongside existing monitors and ultimately chose not to adopt it because its alerts did not offer a clear advantage over current monitoring.)
Beebe argues that clinical judgment, shaped by years of training and experience and informed by subtle sensory cues and signals from technical equipment, cannot be automated. "I can't tell you how many times I have that feeling, I don't feel right about this patient. It could be just the way their skin looks or feels to me." Elven Mitchell, an intensive care nurse of 13 years now at Kaiser Permanente Hospital in Modesto, Calif., echoes that view. "Sometimes you can see a patient and, just looking at them, [know they're] not doing well. It doesn't show in the labs, and it doesn't show on the monitor," he says. "We have five senses, and computers only get input."
Algorithms can augment clinical judgment, experts say, but they cannot replace it. "The models will never have access to all of the data that the provider has," says Ziad Obermeyer, Blue Cross of California Distinguished Associate Professor of Health Policy and Management at the University of California, Berkeley, School of Public Health. The models are mostly analyzing electronic medical records, but not everything is in the digital file. "And that turns out to be a bunch of really important stuff like, How are they answering questions? How are they walking? All these subtle things that physicians and nurses see and understand about patients."
Mitchell, who also serves on his hospital's rapid-response team, says his colleagues have trouble trusting the alerts. He estimates that roughly half of the alerts generated by a centralized monitoring team are false positives, yet hospital policy requires bedside staff to evaluate each one, pulling nurses away from patients already flagged as high risk. (Kaiser Permanente said in a statement that its AI monitoring tools are meant to support clinicians, with decisions remaining with care teams, and that the systems are rigorously tested and continuously monitored.)
"Maybe in 50 years it will be more beneficial, but as it stands, it is a trying-to-make-it-work system," Mitchell says. He wishes there were more regulation in the space because health-care decisions can, in extreme cases, be about life or death.
Across interviews for this article, nurses consistently emphasized that they are not opposed to technology in the hospital. Many said they welcome tools that are carefully validated and demonstrably improve care. What has made them wary, they argue, is the rapid rollout of heavily marketed AI models whose performance in real-world settings falls short of promises. Rolling out unvalidated tools can have lasting consequences. "You are creating mistrust in a generation of clinicians and providers," warns one expert, who requested anonymity out of concern about professional repercussions.
Concerns extend beyond private vendors. Hospitals themselves are sometimes bypassing safeguards that once governed the introduction of new medical technologies, says Nancy Hagans, nurse and president of the New York State Nurses Association.
The risks are not merely theoretical. Obermeyer, the professor at Berkeley's School of Public Health, found that some algorithms used in patient care turned out to be racially biased. "They're being used to screen about 100 million to 150 million people every year for these kinds of decisions, so it's very widespread," he says. "It does bring up the question of why we don't have a system for catching those things before they are deployed and start affecting all these important decisions," he adds, comparing the introduction of AI tools in health care to medical drug development. Unlike with drugs, there is no single gatekeeper for AI; hospitals are often left to validate tools on their own.
At the bedside, opacity has consequences: If the alert is hard to explain, the aftermath still belongs to the clinician. If a device performs differently across patients---missing some, overflagging others---the clinician inherits that, too.
Hype surrounding AI has further complicated matters. Over the past couple of years AI-based listening tools that record doctor-patient interactions and generate a clinical note to document the visit have spread quickly through health care. Many institutions bought them hoping they'd save clinicians time. Many providers appreciate being freed from taking notes while talking to patients, but emerging evidence suggests the efficiency gains may be modest. Studies have reported time savings ranging from negligible to 22 minutes per day. "Everybody rushed in saying these things are magical; they're gonna save us hours. Those savings did not materialize," says Nigam Shah, a professor of medicine at Stanford University and chief data scientist for Stanford Health Care. "What's the return on investment of saving six minutes per day?"
Similar experiences have made some elite institutions wary of relying only on outside companies for algorithmic tools. A few years back Stanford Health Care, Mount Sinai Health System in New York City, and others brought AI development in-house so they could develop their own tools, test tools from vendors, tune them and defend them to clinicians. "It's a strategic redefinition of health-care AI as an institutional capability rather than a commodity technology we purchase," Shah says. At Mount Sinai, that shift has meant focusing less on algorithms themselves and more on adoption and trust---trying to create trust with health-care workers and fitting new tools into the workflow.
AI tools also need to say why they're recommending something and identify the specific signals that triggered the alert, not just present a score. Hospitals need to pay attention to human-machine interactions, says Suchi Saria, John C. Malone Associate Professor of Computer Science at Johns Hopkins University and director of the school's Machine Learning and Healthcare Lab. AI models, she argues, should function more like well-trained team members. "It's not gonna work if this new team member is disruptive. People aren't gonna use it," Saria says. "If this new member is unintelligible, people aren't gonna use it."
Yet many institutions do not consult or co-create with their nurses and other frontline staff when considering or building new AI tools that will be used in patient care. "Happens all the time," says Stanford's Shah. He recalls initially staffing his data-science team with doctors, not nurses, until his institution's chief nursing officer pushed back. He now believes nurses' perspectives are indispensable. "Ask nurses first, doctors second, and if the doctor and nurse disagree, believe the nurse, because they know what's really happening," he says.
To include more staff members in the process of developing AI tools, some institutions have implemented a bottom-up approach in addition to a top-down one. "Many of the best ideas come from people closest to the work, so we created a process where anyone in the company can submit an idea," says Robbie Freeman, a former bedside nurse and now chief digital transformation officer at Mount Sinai. One wound-care nurse, for example, proposed an AI tool to predict which patients are likely to develop bedsores. The program has a high adoption rate, Freeman says, partly because that nurse is enthusiastically training her peers.
Freeman says the goal is not to replace clinical judgment but to build tools clinicians will use---tools that can explain themselves. In the version nurses want, the alert is an invitation to look closer, not an untrustworthy digital manager.
The next frontier arrived at Mount Sinai's cardiac-catheterization lab last year with a new agentic AI system called Sofiya. Instead of nurses calling patients ahead of a stenting procedure to provide instructions and answer questions, Sofiya now gives them a ring. The AI agent, designed with a "soft-spoken, calming" voice and depicted as a female model in scrubs on life-size promotional cutouts, saved Mount Sinai more than 200 nursing hours in five months, according to Annapoorna Kini, director of the cath lab. But some nurses aren't on board with Sofiya. Last November, at a New York City Council meeting, Denash Forbes, a nurse at Mount Sinai for 37 years, testified that Sofiya's work must still be checked by nurses to ensure accuracy.
Even Freeman admits there is "a ways to go" before this agentic AI provides an integrated and seamless experience. Or maybe it will join the ranks of failed AI pilots. As the industry chases the efficiency of autonomous agents, it still lacks an infrastructure for testing the algorithms it deploys. For now the safety of the patient remains anchored in the very thing AI cannot replicate: the intuition of the human clinician. As in the case of Adam Hart, who rejected a digital verdict to protect a patient's lungs, the ultimate value of the nurse in the age of AI may be not their ability to follow the prompt but their willingness to override it.
Incident 1375: OpenAI Allegedly Did Not Alert RCMP After ChatGPT Flagged Violent Chats Before British Columbia School Shooting
“OpenAI Employees Raised Alarms About Canada Shooting Suspect Months Ago”
Months before Jesse Van Rootselaar became the suspect in the mass shooting that devastated a rural town in British Columbia, Canada, OpenAI considered alerting law enforcement about her interactions with its ChatGPT chatbot, the company said.
While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar's posts. Some employees interpreted Van Rootselaar's writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.
OpenAI leaders ultimately decided not to contact authorities.
A spokeswoman for OpenAI said the company banned Van Rootselaar's account but determined that her activity didn't meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others.
On Feb. 10, Van Rootselaar was found dead from what appeared to be a self-inflicted injury at the school where a mass shooting killed eight people and left at least 25 injured. The Royal Canadian Mounted Police identified Van Rootselaar, an 18-year-old trans woman, as the suspect.
The company reached out to the RCMP after it learned of the shooting and is supporting its investigation, the spokeswoman said.
"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," the company said in a statement.
Other aspects of Van Rootselaar's digital footprint emerged in the days after the attack, including a videogame she created on the Roblox platform that simulated a mass shooting. On social media, the suspect shared her concerns about the process of transitioning and her interests in anime cartoons and illicit drugs.
Online platforms have long debated how to balance their users' privacy with public safety when deciding whether to report certain users to law enforcement. That debate is now coming for the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives.
OpenAI said it trains its models to discourage users from committing real-world harm, and routes conversations in which users express intent of harm to human reviewers, who are able to refer them to law enforcement in cases where they are found to pose an imminent risk of serious physical harm.
The company said it weighs the risk of violence against privacy considerations and the potential distress caused to individuals and families by getting police involved unnecessarily.
Van Rootselaar was already known to local police before the shooting. Officers visited her residence multiple times in response to mental-health concerns, and temporarily removed guns from the home.
A specialized team of investigators has also been combing through her online activity and digital footprint for clues about the mass shooting, as well as reviewing her past interactions with police and mental-health professionals, according to RCMP Commissioner Dwayne McDonald.
Archived social-media posts show Van Rootselaar posted pictures of herself shooting at a gun range, claimed to have created a bullet cartridge using a 3-D printer and engaged in online discussion about YouTube videos made by gun enthusiasts.
Incident 1376: Amazon Delivery Van Reportedly Became Stranded on Essex Mudflats After GPS Routed It Onto the Broomway
“A Deadly Medieval Path in England Claims a Modern Victim: An Amazon Van”
It is considered one of the most treacherous paths in Britain, a medieval route that is said to have caused dozens of deaths over hundreds of years.
And in 2026, it proved to be too much for an Amazon van.
One of the company's delivery vehicles became stranded on Sunday after the driver mistakenly steered onto the Broomway, a six-mile walking path in Essex, in southeast England. The driver was following GPS directions to get to Foulness Island, a restricted military testing area, from the mainland.
HM Coastguard Southend said in a statement that it received a call on Sunday morning about "an Amazon delivery van that had driven onto the Broomway." Amazon was made aware of the incident, the statement said, and the company "arranged recovery of the vehicle."
A photograph of the scene posted to social media by the HM Coastguard showed a gray Amazon Prime van parked in the mud flats. The driver was able to exit the vehicle, the statement said.
Amazon declined to comment on the specifics of the incident, but said, "Thankfully driver is safe, van retrieved and we are investigating." The company arranged for a nearby farmer to remove the van, the HM Coastguard said.
The Broomway's name comes from the hundreds of brooms that once marked its path. Edwardian newspapers gave it a more sinister nickname: The Doomway.
It has been described as a disorienting place where sea fog and quicksand can make it feel as if the earth has merged with the sea. Access to the path is available only during low tide. Venture onto the path at the wrong time, and rapidly rising tides can quickly become overwhelming.
More than 100 people are thought to have died on the Broomway, according to the BBC, and it is now only partially open to the public.
"It is still a public road, so it's classed as a byway, which means you can drive a road-legal vehicle on it, legally. But it's highly not recommended," said Kev Brown, who leads walking tours of the path through his company, Thames Estuary Man. "It's not a place to go if you don't know where you're going."
Today, aspiring walkers are advised to travel the Broomway with a local guide who understands the fickle tides and can offer safe passage across the sand and mud flats to the farmland on Foulness Island. The Broomway was once the only path by which services could reach the island.
Among its more regular travelers were postmen, who presumably did not rely on GPS.
Incident 1377: Seedance 2.0 Reportedly Generated Viral Tom Cruise–Brad Pitt Fight Video, Prompting Hollywood IP and Likeness Complaints
“Why an A.I. Video of Tom Cruise Battling Brad Pitt Spooked Hollywood”
It took only a 15-second clip of Tom Cruise and Brad Pitt duking it out on a crumbling rooftop at twilight to draw swift outrage, and sizable fear, from Hollywood over the last few days.
The widely circulated video was created by the Irish director Ruairi Robinson using Seedance 2.0, a powerful artificial intelligence video generation tool owned by the Chinese technology company ByteDance. It had plenty of the bells and whistles of a big-budget Hollywood film: sweeping camera angles, stunt choreography, crisp sound effects and haunting music.
With a two-sentence prompt and the click of a button, Seedance had produced a stunningly realistic result that was a drastic improvement over earlier A.I.-generated videos, often shoddy clips known as A.I. slop. This video was so convincing that it drew near immediate condemnation from some of Hollywood's top organizations and companies.
Rhett Reese, a scriptwriter known for his "Deadpool" films, said in an interview that the Cruise-Pitt video had sent a "cold shiver" up his spine.
"For all of us who work in the industry and devoted our careers and lives to it, I just think it's nothing short of terrifying," he said. "I could just see it costing jobs all over the place."
ByteDance released Seedance 2.0 last week, nearly two months after a previous version had failed to prompt much anger. A news release from the company praised the updated tool's "physical accuracy, realism and controllability," which it said was suitable for the needs of "professional-grade creative scenarios."
"The creation process," the release went on, "is more natural and efficient, allowing users to control their creations like a true 'director.'"
Users promptly flocked to the platform to spin up their own content. An alternate ending to "Game of Thrones" went viral, as did a video of the notoriously beefing rappers Kendrick Lamar and Drake burying the hatchet on "The Tonight Show," and one of Samara Morgan, the vengeful girl in "The Ring" horror films, emerging from an old television set to pet a cat.
Robinson himself posted additional videos, including of Pitt and Cruise battling a robot, and of Pitt sparring with a sword-wielding "zombie ninja."
At the same time, Hollywood was swift to sit up straight. Charles Rivkin, the chairman and chief executive of the Motion Picture Association, called on ByteDance to "immediately cease its infringing activity," saying in a statement that Seedance 2.0 had engaged in the unauthorized use of copyrighted works on a "massive scale." Human Artistry Campaign, a global coalition that advocates using A.I. "with respect for the irreplaceable artists, performers and creatives," said on social media that unauthorized works generated by Seedance 2.0 violated the "most basic aspects of personal autonomy."
Disney, which in a watershed $1 billion deal last year agreed to allow OpenAI's Sora users to generate video content with its characters, sent a cease-and-desist letter to ByteDance, accusing it of supplying Seedance with a "pirated library" of Disney's characters --- "as if Disney's coveted intellectual property were free public-domain clip art."
ByteDance, which also owns TikTok and has been valued at $480 billion in the private markets, said in a statement that it respected intellectual property rights and was aware of the concerns about Seedance.
"We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users," the statement said.
As last year's deal between Disney and OpenAI suggests, Hollywood has for years wrestled with how to manage the rapid growth of generative artificial intelligence. The concerns outlined by Reese echoed the Writers Guild strike in 2023, when for months thousands of union members demanded that studios institute guardrails protecting them from having their jobs or their intellectual property stolen by A.I. In the end, the group won guarantees that A.I. would not encroach on writers' credits and compensation.
Duncan Crabtree-Ireland, the national executive director and chief negotiator of SAG-AFTRA, which represents actors and media artists, said its contracts had specific and enforceable rules about digital replication. The kind of material represented by the Cruise-Pitt battle, he said, "could not be produced by any of the signatories to our contracts --- the studios, the streamers --- without the specific, informed consent of those individuals."
According to Crabtree-Ireland, the real concern is that, even if videos generated by Seedance and other A.I. platforms "are not malicious in intent," they could "really violate someone's right to control how their image, their likeness and their voice is used."
Not everyone is awed by Seedance's latest technology. Heather Anne Campbell, an executive producer and a writer on the animated series "Rick and Morty," said her social media accounts last week had been inundated with Seedance-generated clips of anime, sci-fi and unlikely superhero battles. But she is not yet worried, she said, about losing her job to the technology.
"Everybody is, I think, swept up by the circus that came to town and is showing off," she said. "I haven't seen anything good yet. Nothing that has taken my breath away, nothing that is poignant, nothing that is provocative even. It's all just garbage."
Campbell added that A.I. services like Seedance were at best "averaging machines," and argued that the greatest art was never made quickly or impersonally.
Still, some people working in Hollywood find it difficult to imagine that studios will not come to see A.I. as a cost-saving shortcut. "It would be cheaper to have A.I. write a screenplay than it would be for me to write a screenplay," Reese, the "Deadpool" writer, said. "I just know that in the back of my mind, that's where the terror comes from."
For Reese, a long-term answer to the unease that A.I. will reorder Hollywood could not come quickly enough.
"If I could wave my magic wand and make A.I. go away, at least in the creative field," he said, "I would absolutely wave the wand."
ByteDance, which also owns TikTok and has been valued at $480 billion in the private markets, said in a statement that it respected intellectual property rights and was aware of the concerns about Seedance.
"We are taking steps to strengthen current safeguards as we work to prevent the unauthorized use of intellectual property and likeness by users," the statement said.
As last year's deal between Disney and OpenAI suggests, Hollywood has for years wrestled with how to manage the rapid growth of generative artificial intelligence. The concerns outlined by Reese echoed the Writers Guild strike in 2023, when for months thousands of union members demanded that studios institute guardrails protecting them from having their jobs or their intellectual property stolen by A.I. In the end, the group won guarantees that A.I. would not encroach on writers' credits and compensation.
Duncan Crabtree-Ireland, the national executive director and chief negotiator of SAG-AFTRA, which represents actors and media artists, said its contracts had specific and enforceable rules about digital replication. The kind of material represented by the Cruise-Pitt battle, he said, "could not be produced by any of the signatories to our contracts --- the studios, the streamers --- without the specific, informed consent of those individuals."
According to Crabtree-Ireland, the real concern is that, even if videos generated by Seedance and other A.I. platforms "are not malicious in intent," they could "really violate someone's right to control how their image, their likeness and their voice is used."
Not everyone is awed by Seedance's latest technology. Heather Anne Campbell, an executive producer and a writer on the animated series "Rick and Morty," said her social media accounts last week had been inundated with Seedance-generated clips of anime, sci-fi and unlikely superhero battles. But she is not yet worried, she said, about losing her job to the technology.
"Everybody is, I think, swept up by the circus that came to town and is showing off," she said. "I haven't seen anything good yet. Nothing that has taken my breath away, nothing that is poignant, nothing that is provocative even. It's all just garbage."
Campbell added that A.I. services like Seedance were at best "averaging machines," and argued that the greatest art was never made quickly or impersonally.
Still, some people working in Hollywood find it difficult to imagine that studios will not come to see A.I. as a cost-saving shortcut. "It would be cheaper to have A.I. write a screenplay than it would be for me to write a screenplay," Reese, the "Deadpool" writer, said. "I just know that in the back of my mind, that's where the terror comes from."
For Reese, a long-term answer to the unease that A.I. will reorder Hollywood could not come quickly enough.
"If I could wave my magic wand and make A.I. go away, at least in the creative field," he said, "I would absolutely wave the wand."
Incident 1378: Purportedly AI-Generated Video Allegedly Depicted Radnor High School Students Inappropriately, Prompting Police Investigation
“Radnor High School alerts families to 'inappropriate' AI video depicting students”
RADNOR, Pa. (WPVI) -- Radnor High School and Radnor Township Police are investigating an AI-generated video that allegedly depicts several students inappropriately, according to an email sent to families by Principal Dr. Joseph MacNamara.
"I am writing to address concerns and rumors regarding an AI-generated video that was reported to depict several of our students in an inappropriate manner," MacNamara wrote. "We understand how upsetting and serious this situation is, and we want to assure you that we are treating it with the highest level of urgency and care."
The principal said families of the students involved have been contacted and provided with support. The email also stated that the school immediately began an internal investigation and notified Radnor Township Police.
"RPD is actively involved, and we are continuing to work closely with them as we gather information," MacNamara wrote.
The school district declined to comment further, and Radnor Township Police said only that the incident is under active investigation.
Action News spoke with a mother of two children who attend Radnor High School, who asked to remain anonymous.
"The biggest concern is the psychological safety of our daughters that are going to school every day, that are now looking over their shoulder. They're wondering if they should post something on social media," she said.
Community members expressed concern about the broader implications of the technology.
"It's gonna be a recurring thing if we don't do anything to stop it. As far as where it's gonna be in a few years, who knows?" said Frank McHugh, who has a relative at Radnor High School.
"It's so gross, and especially for young women, it's so incredibly dangerous for this to be happening," said Meredith Criswell.
While major AI platforms such as ChatGPT and Gemini have safeguards to prevent the creation of sexually explicit images, Drexel University criminology professor and cybersecurity expert Dr. Robert D'Ovidio said unregulated apps remain widely available.
"And there's a big business surrounding this online," D'Ovidio said. "These rogue tools that allow criminals to engage them for a fee to do things like create sexualized images from what appears to be an innocent selfie, for example."
Lawmakers have scrambled to catch up to the rapidly advancing technology. Last year, Gov. Josh Shapiro signed a law outlawing deepfake pornography involving both minors and non-consenting adults.
"We know this is not an isolated incident, what happened in Radnor Township," D'Ovidio told Action News. "And parents need to recognize this. This is not something that's indicative of a community problem. Yes, it's happening here, but it's happening all over the country."
He urged parents to talk with their children about the risks.
"This is the time for discussion with their children because the capabilities are now in their kids' hands," D'Ovidio said. "You know, be careful who you let into your social circles on the various social media platforms that you're using. Make sure you can trust these individuals because, again, you're giving them access to photos to videos that they can easily weaponize against you."
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026
By Daniel Atherton
2026-02-02
Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID between the beginning of November 2025 and the end of January 2026...
The Database in Print
Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is organized around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application. We kindly request your financial support with a donation.