AI Incident Database

Report 6113

Associated Incidents

Incident 1152 · 5 Reports
LLM-Driven Replit Agent Reportedly Executed Unauthorized Destructive Commands During Code Freeze, Leading to Loss of Production Data

AI coding tool wipes production database, fabricates 4,000 users, and lies to cover its tracks
cybernews.com · 2025

A widely used AI coding assistant from Replit reportedly went rogue, wiping a database and generating 4,000 fictional users with completely fabricated data.

The disturbing report comes from tech entrepreneur and founder of SaaStr, Jason M. Lemkin, who took to social media with a cautionary tale.

"I am worried about safety. I was vibe coding for 80 hours last week, and Replit AI was lying to me all weekend. It finally admitted it lied on purpose," he said on a LinkedIn video.

According to Lemkin, the AI assistant ignored repeated instructions and concealed bugs and issues by generating fake data, fabricating reports, and lying about the results of unit tests.

Lemkin states that the AI tool modified the code despite his instructions not to do so. "I never asked to do this, and it did it on its own. I told it 11 times in ALL CAPS DON'T DO IT."

He attempted to enforce a code freeze within Replit, but quickly discovered it was impossible.

"There is no way to enforce a code freeze in vibe coding apps like Replit. There just isn't," he wrote.

"In fact, seconds after I posted this, for our >very< first talk of the day -- @Replit again violated the code freeze."

Despite continued efforts to rein in the AI's behavior, he found that Replit couldn't guarantee running a unit test without risking a database wipe.

Ultimately, he concluded that the platform simply isn't ready for production use, especially not for its core audience of non-technical users hoping to build commercial software without writing code.

With 30 million users worldwide, Replit has been a major player in software development. It offers AI tools to help users write, test, and deploy code.

Replit's CEO reacted

Amjad Masad, the CEO of Replit, responded to Lemkin's account. He took to X to apologize for the AI tool's mistakes, which he deemed "unacceptable" and said "should never be possible."

Masad said the Replit team worked over the weekend to deploy "automatic DB dev/prod separation to prevent this categorically."
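Replit has not published how its dev/prod separation works, but the general pattern is straightforward: resolve database credentials by environment, default to the development database, and refuse destructive operations against production without explicit confirmation. The sketch below is purely illustrative; the variable names (`APP_ENV`, `DEV_DATABASE_URL`, `PROD_DATABASE_URL`) and functions are assumptions, not Replit's actual configuration.

```python
import os
from typing import Optional


def resolve_db_url(env: Optional[str] = None) -> str:
    """Return the database URL for the given environment.

    Fail-safe default: unless APP_ENV is explicitly set to
    'production', tooling talks to the development database.
    """
    env = env or os.environ.get("APP_ENV", "development")
    urls = {
        "development": os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db"),
        "production": os.environ.get("PROD_DATABASE_URL", ""),
    }
    if env not in urls:
        raise ValueError(f"unknown environment: {env!r}")
    if env == "production" and not urls["production"]:
        # Refuse to guess a production URL; it must be set deliberately.
        raise RuntimeError("PROD_DATABASE_URL is not set")
    return urls[env]


def guard_destructive(env: str, confirmed: bool = False) -> None:
    """Block destructive operations (DROP, bulk DELETE, wipes) in
    production unless a human has explicitly confirmed them."""
    if env == "production" and not confirmed:
        raise PermissionError("destructive operation blocked in production")
```

The key design choice is that production access requires two deliberate steps (setting the environment and confirming the operation), so an agent acting on defaults can only ever touch the development database.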

"Staging environments (are) in the works, too. More tomorrow," Masad added.

Replit's CEO also referenced the "code freeze pain" and said that the company is "actively working on a planning/chat-only mode" so that users can "strategize without risking (their) codebase."
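A "planning/chat-only mode" amounts to a mode gate in front of the agent's write tools: while planning, every proposed edit is rejected before it reaches the filesystem or database. The following is a minimal sketch of that idea under assumed names (`AgentMode`, `apply_edit`); it is not Replit's implementation.

```python
from enum import Enum


class AgentMode(Enum):
    PLANNING = "planning"   # chat only: discuss and strategize, no writes
    EXECUTE = "execute"     # full tool access


class WriteBlockedError(Exception):
    """Raised when the agent attempts a write in planning mode."""


def apply_edit(mode: AgentMode, path: str, new_text: str, files: dict) -> None:
    """Apply an agent-proposed file edit, but only outside planning mode.

    `files` stands in for the workspace; in a real system this gate
    would sit in front of filesystem and database write tools alike.
    """
    if mode is AgentMode.PLANNING:
        raise WriteBlockedError(f"planning mode: refusing to write {path}")
    files[path] = new_text
```

Because the check lives in the tool layer rather than in the model's instructions, it holds even when the model ignores "DON'T DO IT"-style prompts, which is exactly the failure Lemkin described.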

Masad said that Replit will reimburse Lemkin for his troubles and "conduct a postmortem to determine exactly what happened and how we can better respond to it in the future."

"I know Replit says 'improvements are coming soon,' but they are doing $100m+ ARR. At least make the guardrails better. Somehow. Even if it's hard. It's all hard," Lemkin said on a post on X.

AI coding is controversial

AI coding tools are fueling a new trend in the tech scene: vibe coding. The term is widely credited to Andrej Karpathy, co-founder of OpenAI, who posted about "giving in to the vibes and forgetting that the code even exists."

Anysphere, the AI startup behind popular AI code tool Cursor, just bagged a $900 million funding round at a $9.9 billion valuation. Founded by ex-OpenAI and Tesla engineers, the company claims to be generating a billion lines of code a day.

However, many coders are unhappy with AI's output, complaining that it simply "writes trash code." One problem is that AI-generated code follows its own internal logic, which can be hard to understand, troubleshoot, or build upon.

One Redditor described AI coding this way: "The drunk uncle walks by after the wreck and gives you a roll of duct tape before asking to borrow some money to go to Vegas."

Security is another problem. The growing volume of AI-generated code presents a live opportunity for exploitation, since it can ship with security loopholes that no human has reviewed.

Hackers are also targeting vibe coders with malicious coding extensions; one such extension has been downloaded 200,000 times. Instead of providing any useful features, it runs PowerShell scripts that give attackers remote access to the infected computer.

Recent updates

🟢 [2025-07-21 13:20 GMT] --- Published the initial report on Jason Lemkin's claims that Replit AI deleted a production database, fabricated 4,000 users, and misled him during vibe coding sessions.

🟢 [2025-07-22 15:00 GMT] --- Added Replit CEO Amjad Masad's public response. He called the incident "unacceptable" and announced that fixes were deployed, including dev/prod DB separation and a staging environment in progress.

🟢 [2025-07-23 12:58 GMT] --- Included online reactions from Reddit, X (Twitter), and LinkedIn --- many users raised concerns about "vibe coding," the reliability of AI code tools, and broader issues in AI-generated software.

