
Welcome to the AI Incident Database


Incident 1360: CISA Acting Director Reportedly Uploaded Sensitive Government Documents to Public ChatGPT Instance

“Trump’s acting cyber chief uploaded sensitive files into a public version of ChatGPT” (Latest Incident Report)
politico.com, 2026-02-01

The interim head of the country's cyber defense agency uploaded sensitive contracting documents into a public version of ChatGPT last summer, triggering multiple automated security warnings that are meant to stop the theft or unintentional disclosure of government material from federal networks, according to four Department of Homeland Security officials with knowledge of the incident.

The apparent misstep from Madhu Gottumukkala was especially noteworthy because the acting director of the Cybersecurity and Infrastructure Security Agency had requested special permission from CISA's Office of the Chief Information Officer to use the popular AI tool soon after arriving at the agency this May, three of the officials said. The app was blocked for other DHS employees at the time.

None of the files Gottumukkala plugged into ChatGPT were classified, according to the four officials, each of whom was granted anonymity for fear of retribution. But the material included CISA contracting documents marked "for official use only," a government designation for information that is considered sensitive and not for public release.

Cybersecurity sensors at CISA flagged the uploads this past August, said the four officials. One official specified there were multiple such warnings in the first week of August alone. Senior officials at DHS subsequently led an internal review to assess if there had been any harm to government security from the exposures, according to two of the four officials.

It is not clear what the review concluded.

In an emailed statement, CISA's Director of Public Affairs Marci McCarthy said Gottumukkala "was granted permission to use ChatGPT with DHS controls in place," and that "this use was short-term and limited." McCarthy added that the agency was committed to "harnessing AI and other cutting-edge technologies to drive government modernization and deliver on" Trump's executive order removing barriers to America's leadership in AI.

The email also appeared to dispute the timeline of POLITICO's reporting: "Acting Director Dr. Madhu Gottumukkala last used ChatGPT in mid-July 2025 under an authorized temporary exception granted to some employees. CISA's security posture remains to block access to ChatGPT by default unless granted an exception."

Gottumukkala is currently the senior-most political official at CISA, an agency tasked with securing federal networks against sophisticated, state-backed hackers from adversarial nations, including Russia and China.

Any material uploaded into the public version of ChatGPT that Gottumukkala was using is shared with ChatGPT-owner OpenAI, meaning it can be used to help answer prompts from other users of the app. OpenAI has said the app has more than 700 million total active users.

Other AI tools now approved for use by DHS employees --- such as DHS's self-built AI-powered chatbot, DHSChat --- are configured to prevent queries or documents input into them from leaving federal networks.

Gottumukkala "forced CISA's hand into making them give him ChatGPT, and then he abused it," said the first official.

All federal officials are trained on the proper handling of sensitive documents. According to DHS policy, security officials are also supposed to investigate the "cause and affect" of any exposure of official use documents, and determine the "appropriateness" of any administrative or disciplinary action. Depending on the circumstances, those could range from things like mandatory retraining or a formal warning, to more serious measures, like the suspension or revocation of a security clearance, said one of the four officials.

After DHS detected the activity, Gottumukkala spoke with senior officials at DHS to review what he uploaded into ChatGPT, said two of the four officials. DHS's then-acting general counsel, Joseph Mazzara, was involved in the effort to assess any potential harm to the department, according to the first official. Antoine McCord, DHS's chief information officer, was also involved, according to a second official.

Gottumukkala also had meetings this August with CISA's chief information officer, Robert Costello, and its chief counsel, Spencer Fisher, about the incident and the proper handling of for official use only material, the four people said.

Mazzara and Costello did not respond to requests for comment. McCord and Fisher could not be reached for comment.

Gottumukkala has helmed the agency in an acting capacity since May, when he was appointed by DHS Secretary Kristi Noem as its deputy director. Donald Trump's nominee to head CISA, DHS special adviser Sean Plankey, was blocked last year by Sen. Rick Scott (R-Fla.) over a Coast Guard shipbuilding contract. A date for his new confirmation hearing has not been set.

Gottumukkala's tenure atop the agency has not been smooth --- and this would not be his first security-related incident.

At least six career staff were placed on leave this summer after Gottumukkala failed a counterintelligence polygraph exam that he pushed to take, as POLITICO first reported. DHS has called the polygraph "unsanctioned." Asked during Congressional testimony last week if he was "aware" of the failed test, Gottumukkala twice told Rep. Bennie Thompson (D-Miss.) that he did not "accept the premise of that characterization."

And last week, Gottumukkala tried to oust Costello, CISA's CIO, before other political appointees at the agency intervened to block the move.


Incident 1361: Waymo Autonomous Vehicle Reportedly Struck Child Near Elementary School in Santa Monica, California

“A Waymo hit a child near an elementary school. The NHTSA is investigating”
cnbc.com, 2026-02-01

Alphabet-owned Waymo has informed the National Highway Traffic Safety Administration that one of its driverless vehicles "struck a child near an elementary school" in Santa Monica, California.

The Jan. 23 incident is now being investigated by the U.S. vehicle safety authority.

According to records posted to the NHTSA website, the child sustained minor injuries.

The Waymo collision occurred within two blocks of an elementary school during normal school drop-off hours, while other children, a crossing guard and several double-parked vehicles were nearby.

"The child ran across the street from behind a double-parked SUV towards the school and was struck by the Waymo AV," NHTSA said in a document describing the incident that necessitated their "preliminary evaluation."

The vehicle was running on Waymo's 5th Generation Automated Driving System with no human safety supervisor in the vehicle.

NHTSA's Office of Defects Investigations plans to assess "whether the Waymo AV exercised appropriate caution given, among other things, its proximity to the elementary school during drop off hours, and the presence of young pedestrians and other potential vulnerable road users."

The agency's investigation will include analysis of the "intended behavior" of Waymo's driverless vehicles in school zones and neighboring areas, especially during normal school pick-up and drop-off times, as well as the company's post-impact response.

"Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made," Waymo said in a statement on their blog.

The company wrote that a fully attentive human driver in the same situation would have likely "made contact with the pedestrian" at a higher speed of 14 miles per hour.

"This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver," the company said.

After the robotaxi struck the child at low speed, the pedestrian "stood up immediately, walked to the sidewalk, and we called 911," Waymo said. The company also reported that its vehicle "remained stopped, moved to the side of the road, and stayed there until law enforcement cleared the vehicle to leave the scene."

Waymo was involved in a separate incident on Jan. 25 in Los Angeles, where one of its Zeekr vehicles sped through a one-way residential street near Dodger Stadium. The vehicle struck several parked cars, including one that had a person in it.

There were no reported injuries, Waymo said in an emailed response.

The vehicle was being operated in manual mode by a human specialist at the time of the event. Waymo did not respond to questions about how the vehicle ended up speeding in the incident.

Waymo is already facing regulatory scrutiny for the behavior of its robotaxis around school buses.

Last week, the National Transportation Safety Board initiated an investigation after reports of Waymo robotaxis illegally passing stopped school buses in several different locations.

In a statement posted on X on January 23, the NTSB said it "opened a safety investigation to examine the interaction between Waymo vehicles and school buses stopped for loading and unloading students in Austin, Texas."

The Austin Independent School District previously identified at least 19 incidents where Waymo vehicles passed its school buses, and demanded that Waymo stop using its driverless vehicles during school bus hours in Austin until the safety concerns could be resolved.

NHTSA had earlier started "to investigate the performance of the Waymo ADS around stopped school buses and the system's ability to follow traffic safety laws concerning school buses," it said in a letter the agency sent to Waymo.

While both the NTSB and NHTSA are vehicle safety watchdogs in the U.S. government, their roles are distinct. The NTSB investigates accidents to determine underlying causes of damaging or concerning incidents. The board also makes safety recommendations to regulators (including NHTSA) and the auto industry.


Incident 1353: ICE Facial Recognition App Mobile Fortify Reportedly Misidentified Woman Twice During Immigration Enforcement in Oregon

“ICE’s Facial Recognition App Misidentified a Woman. Twice”
404media.co, 2026-01-31

When authorities used Immigration and Customs Enforcement's (ICE) facial recognition app on a detained woman in an attempt to learn her identity and immigration status, it returned two different and incorrect names, raising serious questions about the accuracy of the app ICE is using to determine who should be removed from the United States, according to testimony from a Customs and Border Protection (CBP) official obtained by 404 Media.

ICE has told lawmakers the app, called Mobile Fortify, provides a "definitive" determination of someone's immigration status, and should be trusted over a birth certificate. The incident, which happened last year in Oregon, casts doubt on that claim.

"ICE has treated Mobile Fortify like it's a 100% accurate record retrieval system of everybody's immigration status for the entire population of the U.S. when this is obviously not true, and could never be true from a technical perspective," Cooper Quintin, a security researcher and senior public interest technologist at the Electronic Frontier Foundation, told 404 Media. "It is sickening that ICE is using this flawed app and unacceptable invasion of biometric privacy to supposedly determine whether someone is undocumented and deport them or even worse."

The incident involved a 45-year-old woman who court records call MJMA. She was detained with more than 30 other people during a raid in Woodburn after authorities smashed a van's driver side window and pulled her from the vehicle. MJMA is now suing ICE and being represented by attorneys from the Innovation Law Lab.

Oregon Live briefly mentioned the face scan in a December report. 404 Media has now obtained the CBP official's full testimony and uploaded a copy here.

The CBP official, called MK in the transcript, said she was trying to determine the woman's immigration status. MJMA was refusing to answer questions.

"So then, in order to [...] to use another tool that I had to identify her, I took my government cell phone out of my pocket. It has the little facial recognition mobile query for CBP. And I took a photo [...] I tried to take a photo of her face, and she looked down," MK said. Another part of the transcript specifically names Mobile Fortify.

"So then I remember I moved her shoulder back. And she complained because I did forget she had handcuffs on and [...] like, behind her back. And she kind of, like, yelped, like if I had caused pain, like a lot of pain. So then I apologized. I said (speaking Spanish), which is 'I'm sorry,' (speaking Spanish)," she continued.

MK said she explained to the woman she needed to take her photo. "So I took her photo, where she kind of looked at me. And then when I looked down at my mobile query, it came up to a very [...] like, a similar person. I wasn't sure if it was her or not."

The first name the app returned was Maria, according to the testimony. "It matched to someone [...] I remember saying, 'Maria? Like, is this Maria?' And then at that point there was other ICE agents around me. So they were like 'Maria, Maria,' to see if she would respond because, again, she was refusing to talk to us, which is her right."

After MJMA did not respond, either verbally or nonverbally with a change in her face, the CBP official scanned her face again. This time, Mobile Fortify gave a different name.

"And then I tried taking her photo again, and this time it came up to someone else. I can't recall the name, but we used that name as well, 'Are you this person?' She looked at us. Like, I remember her eyebrows kind of like [...] like, 'Who are you talking about?' She gave us that look," MK testified.

MJMA again didn't respond verbally, and instead crunched her eyebrows, according to the testimony. "And so then we were like, 'Well, I don't know. Possible,'" the CBP official said.

Stephen Manning, executive director at Innovation Law Lab, told 404 Media both of the names Mobile Fortify returned were incorrect. 

"The app returning different names for the same individual shows that it's not reliable or accurate and shouldn't be used in the field," Homeland Security Ranking Member Rep. Bennie G. Thompson told 404 Media in a statement. 

Oregon Live reported that after authorities moved MJMA to an ICE detention center in Tacoma, they released her the next day without any conditions. She is suing to assert her Constitutional rights, according to Innovation Law Lab. In her own testimony, MJMA said she was issued a B-2 visa, which is a tourism visa, and most recently entered the U.S. with it last January. Manning said that MJMA is seeking asylum.

"My job is to give people digital security advice and there is no advice I can give on this because you can't change your face, and if you don't submit to their demands ICE has demonstrated that they are willing to shoot you," Quintin added.

404 Media first revealed the existence of Mobile Fortify in June based on leaked ICE emails. Since then 404 Media has obtained a user guide for the tool which explained the databases and images it uses; showed that both ICE and CBP officials are using the app around the country; and found the Department of Homeland Security (DHS) launched a version of the app for local law enforcement called Mobile Identify. That app was taken down from the Google Play Store in December.

Mobile Fortify uses CBP systems ordinarily used to verify travelers as they enter the U.S., according to the leaked material. The app turned the capability inwards onto American streets. In partnership with Reveal, 404 Media reported the app has been used against U.S. citizens.

404 Media previously obtained an internal DHS document under the Freedom of Information Act which showed ICE does not let people decline to be scanned by Mobile Fortify.

Last week a group of Democratic lawmakers introduced legislation, written by Rep. Thompson, that aims to rein in Mobile Fortify. That legislation would essentially kill the local law enforcement version of the app, restrict use of Mobile Fortify to points of entry into the U.S., and force DHS to delete images of U.S. citizens after 12 hours.

Rep. Thompson previously told 404 Media, "ICE officials have told us that an apparent biometric match by Mobile Fortify is a 'definitive' determination of a person's status and that an ICE officer may ignore evidence of American citizenship---including a birth certificate---if the app says the person is an alien."

Inaccuracy has long been a concern with facial recognition systems, and mistakes have led to innocent people being detained or charged with crimes. When facial recognition tools do err, the errors disproportionately affect people of color.

"For over a decade, researchers have shown that facial recognition is a flawed technology that performs particularly poorly on women of color. Using it to determine a person's status is inviting misidentifications and false positives," Chris Gilliard, a privacy researcher, told 404 Media. "The willful use of such an error-prone technology suggests that the appearance of a process is more important than accuracy or fairness."

"This is the logical extension of a landscape that promotes facial recognition as a tool for seemingly harmless purposes like fast food drive throughs or unlocking your phone," he added.

CBP acknowledged a request for comment but did not provide a response in time for publication.

In another piece of testimony, this time from an ICE deportation officer called DR in the court records, the judge asked "what's the rate of identification with Mobile Fortify?"

DR replied, "I [...] I can't speak to that number."


Incident 1357: White House Reportedly Shares Purportedly AI-Altered Arrest Photo Depicting Minnesota Protester Nekima Levy Armstrong as Crying

“White House Posts Photo Altered to Show Arrested Minnesota Protester Crying”
nytimes.com, 2026-01-31

The White House on Thursday posted a digitally altered image showing a demonstrator, who had helped interrupt a church service in Minnesota last weekend, crying as she was arrested. A previous version of the image, also posted by an official government account, showed her looking forward calmly.

When asked about its post, the White House pointed to a message on X from Kaelan Dorr, the deputy communications director, who wrote, "Enforcement of the law will continue. The memes will continue."

The Justice Department said on Thursday morning that it had taken the demonstrator, Nekima Levy Armstrong, a lawyer, into custody, accusing her of helping to interrupt a church service in St. Paul, Minn., on Sunday. Demonstrators had gathered on Sunday to protest a pastor's apparent connection to Immigration and Customs Enforcement.

Less than an hour after Attorney General Pam Bondi announced the arrest on X on Thursday, Homeland Security Secretary Kristi Noem posted an image of the arrest on the same social media platform. In Ms. Noem's image, Ms. Levy Armstrong appears composed, walking in front of a law enforcement agent whose face is blurred out. Karoline Leavitt, the White House press secretary, shared Ms. Noem's post.

Roughly a half-hour after Ms. Noem sent her message, the White House posted its own version of the arrest image, in which Ms. Levy Armstrong appears to be sobbing. Her skin appears to have been darkened. The arresting agent in Ms. Noem's image is in exactly the same position.

The New York Times ran the image used by Ms. Noem as well as the one posted by the White House through Resemble.AI, an A.I. detection system. It concluded that Ms. Noem's image was real but that the White House's version showed signs of manipulation on Ms. Levy Armstrong's face. The Times was able to create images nearly identical to the White House's version by asking Gemini and Grok --- generative A.I. tools from Google and Elon Musk's xAI start-up --- to alter Ms. Noem's original image.

President Trump and his circle are enthusiastic distributors of A.I.-generated content, having shared dozens of synthetic images in recent years. Often, the visuals are obviously artificial, including posts in the past year showing Mr. Trump as a king and as a fighter pilot dropping excrement on demonstrators.

The doctored photograph could end up hindering the Justice Department's nascent prosecution of Ms. Levy Armstrong.

As the case proceeds, her lawyers could use it to accuse the Trump administration of making what are known as improper extrajudicial statements. Most federal courts bar prosecutors from making any remarks about court filings or a legal proceeding outside of court in a way that could prejudice the pool of jurors who might ultimately hear the case.

Ms. Levy Armstrong's lawyers could also claim that the photo was evidence that the Justice Department bore some sort of animus against her and filed the charges vindictively. A motion of that kind could, in theory, result in the charges one day being dismissed.


Incident 1358: Purportedly AI-Altered Images Reportedly Distort Evidence After Minneapolis Shooting of ICU Nurse Alex Pretti

“False Posts and Altered Images Distort Views of Minnesota Shooting”
nytimes.com, 2026-01-31

Across social media, pro-Trump influencers and others muddled the evidence of the killing of a nurse in Minneapolis on Saturday with social media posts that included misdirection and fabricated content.

While verified videos and witnesses showed how federal immigration agents tackled and shot Alex Pretti, 37, the posts tried to warp the events, including in ways intended to support the Trump administration's claims that Mr. Pretti was at fault for his own death. Some posts smeared him or portrayed him as a radical activist.

Nick Sortor, a pro-Trump influencer with 1.4 million followers on X, incorrectly identified Mr. Pretti, a U.S. citizen, as an unauthorized immigrant. Jack Posobiec, a Trump loyalist with 3.3 million followers on X, falsely described Mr. Pretti as having "run up on police" and drawn a gun --- claims that other users on X corrected in an appended note. Photos of different men --- dressed in drag or shirtless at a street festival --- were wrongly identified as Mr. Pretti and shared widely.

Mr. Posobiec, Mr. Sortor and the White House did not immediately respond to requests for comment.

The chaotic scene was captured on video from several angles, yet that shaky footage added to debate online, where it was rapidly interpreted and scrutinized.

Authenticated footage shows Mr. Pretti, who had no known criminal record and had a permit to carry a concealed firearm, stepping in front of a woman being pepper-sprayed by an agent before being restrained himself. One agent removed Mr. Pretti's pistol, which he had not drawn, and then another shot him repeatedly in the back. Witnesses corroborated the details in the videos.

The flurry of online activity echoed the broader political narratives that emerged after the shooting. Top Trump administration officials have repeatedly lodged unsubstantiated accusations of domestic terrorism against Mr. Pretti, who was pinned down and had been disarmed when agents opened fire.

Some conservative accounts pushed back against the Trump administration's narrative. Responding to the Trump administration's suggestion that Mr. Pretti wanted to "massacre" law enforcement, Tim Pool, a right-wing podcaster, wrote on X that Mr. Pretti "clearly was not intending to massacre ICE."

Still, many social media users repeated the Trump administration's representation of events. Others posted images of the scene misleadingly altered with artificial intelligence.

One image, for example, was edited to depict Mr. Pretti pointing a gun at an agent, though he was actually holding a phone. Another was altered with Gemini, Google's A.I. tool, supposedly to enhance and sharpen it to reveal key details about the scene. The altered image included obvious errors, including changes to Mr. Pretti's face. It also removed a gun from the agent's hand, fueling more debate on social media over what had happened.

About the Database

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

You are invited to submit incident reports; submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – November and December 2025 and January 2026

By Daniel Atherton

2026-02-02

Le Front de l'Yser (Flandre), Georges Lebacq, 1917. Trending in the AIID: between the beginning of November 2025 and the end of January 2026...

The Database in Print

Read about the database at Time Magazine, Vice News, Venture Beat, Wired, Bulletin of the Atomic Scientists, and Newsweek, among other outlets.

Incident Report Submission Leaderboards

These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.

New Incidents Contributed
  • 🥇 Daniel Atherton: 702
  • 🥈 Anonymous: 154
  • 🥉 Khoa Lam: 93

Reports added to Existing Incidents
  • 🥇 Daniel Atherton: 796
  • 🥈 Anonymous: 232
  • 🥉 Khoa Lam: 230

Total Report Contributions
  • 🥇 Daniel Atherton: 2995
  • 🥈 Anonymous: 970
  • 🥉 1: 587
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.

Random Incidents
  • Predictive Policing Biases of PredPol
  • Aledo High School Student Allegedly Generates and Distributes Deepfake Nudes of Seven Female Classmates
  • ChatGPT Reportedly Introduces Errors in Critical Child Protection Court Report
  • Child Sexual Abuse Material Taints Image Generators
  • ChatGPT Reportedly Produced False Court Case Law Presented by Legal Counsel in Court
The Responsible AI Collaborative

The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.

