AI Incident Database

Report 5243

Associated Incidents

Incident 1080 · 8 Reports
Noodlophile Stealer Reportedly Distributed Through Allegedly Fraudulent AI Content Platforms

Attackers Lace Fake Generative AI Tools With 'Noodlophile' Malware
darkreading.com · 2025

An attacker is offering supposed generative AI tools to users in Facebook groups, only to give them malware once they upload their media to the fraudulent "tool."

Security vendor Morphisec detailed a campaign on May 8 in which threat actors are advertising "AI-themed platforms" on social media sites like Facebook, offering to generate AI images, videos, websites, logos, and more.

However, once the user uploads something like a reference image, the fake website "processes" the material and instructs the user to download a finished product. In this campaign, that finished product is malware. In a research blog post, Morphisec researcher Shmuel Uzan detailed variants of "Noodlophile Stealer," a Swiss Army knife of malware that "combines browser credential theft, wallet exfiltration, and optional remote access deployment."

In a time of heavy LLM hype, a malware campaign such as this could catch not only individuals in its crosshairs, but also small businesses looking to access free marketing tools.

How the Campaign Works

Threat actors create Facebook groups claiming to offer generative AI tools to users, in particular posing as Luma AI's Dream Machine, an otherwise legitimate generative AI video tool.

These groups include links to fraudulent, believable-looking websites promising to turn files and prompts into various media and marketing materials. There are numerous groups, each with thousands of followers, Uzan explained. "A simple search through social media platforms often leads to additional large-scale groups, creating a network that amplifies the reach and visibility of these fake tools."

As noted, at the end of processing, the website tells the user to download the completed file. This is where the malware comes in.

"At the final stage, users are instructed to download their 'processed' content. In reality, they unknowingly download a malicious file," according to the blog post. "This file installs malware --- such as Noodlophile or Noodlophile bundled with XWorm --- onto their systems, enabling attackers to steal data, harvest credentials, and potentially gain remote access to infected devices."

Noodlophile, sometimes bundled with XWorm, is installed as the last part of an attack chain that begins with a series of components downloaded through these fake websites, such as a .NET loader and a persistence script. Noodlophile steals browser credentials, cookies, and cryptocurrency wallet data before sending them to a Telegram bot. Morphisec's research includes indicators of compromise.

The name Noodlophile refers both to the malware and to its developer, who Uzan noted is likely of Vietnamese origin. The malware is offered through malware-as-a-service schemes.

Defender Takeaways

A campaign like this is tricky: economic headwinds and tight budgets can lead an employee at a small or midsized business to visit such a website in search of low- or no-cost marketing materials, only to be compromised because they were never trained to recognize schemes like this.

Morphisec CTO Michael Gorelik tells Dark Reading that the campaign's primary targets are freelancers, AI enthusiasts, SMBs, and general users, adding that infection attempts were seen at midsized businesses, including at least one case where a "stealer payload was blocked before full execution."

Gorelik says defenders should avoid free or unverified AI platforms that don't come from reputable sources, and maintain "strict separation between business and personal activities." He also recommends treating downloaded archive files such as .zip and .rar with caution and, of course, educating users to avoid phishing attempts and risky behavior.
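The caution around downloaded archives can be illustrated with a minimal, hypothetical triage sketch (not from the article or Morphisec's research): a Python check that flags ZIP members carrying executable extensions, the kind of disguised "processed video" payload this campaign delivers. The file names and the extension list here are illustrative assumptions, not indicators from the actual campaign.

```python
import io
import zipfile

# Illustrative (non-exhaustive) set of extensions that mark a Windows
# archive member as executable rather than media.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".js", ".vbs", ".lnk"}

def suspicious_members(archive_bytes: bytes) -> list:
    """Return names of ZIP members whose extension suggests an executable,
    including double extensions like 'video.mp4.exe'."""
    with zipfile.ZipFile(io.BytesIO(archive_bytes)) as zf:
        return [
            name
            for name in zf.namelist()
            if any(name.lower().endswith(ext) for ext in SUSPICIOUS_EXTENSIONS)
        ]

# Demo: build an in-memory archive mimicking the lure -- a "processed video"
# that is actually an executable hiding behind a double extension.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("Video_Dream_AI.mp4.exe", b"MZ")        # disguised executable
    zf.writestr("readme.txt", b"thanks for downloading")  # benign member

print(suspicious_members(buf.getvalue()))  # ['Video_Dream_AI.mp4.exe']
```

A real mail or download gateway would go further (content inspection, signatures, sandboxing), but even an extension check like this catches the simplest double-extension lures before a user can run them.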

