Google warns of surge in fake AI tools used to spread malware
Cybercriminals are exploiting the popularity of AI with fake tools that steal sensitive user data through malicious ads.

Google has warned of an alarming surge in fake artificial intelligence tools advertised across social media platforms that install malware onto victims' devices.
Threat actors are preying on growing public interest in AI with websites that pose as video generation tools but deliver information-stealing software, according to a blog post from the tech giant.
"As AI's popularity surged over the past couple of years, cybercriminals quickly moved to exploit the widespread excitement," Google said. "Their actions have fuelled a massive and rapidly expanding campaign centred on fraudulent websites masquerading as cutting-edge AI tools."
The campaign, attributed to a threat group tracked as UNC6032, has been active since mid-2024 and has targeted users across geographies and sectors, aiming to steal credentials, cookies, credit card data and crypto wallet information.
Google’s Mandiant unit analyzed more than 120 malicious ads, viewed by over 2.3 million users in EU countries alone, primarily through Facebook and, to a lesser extent, LinkedIn.
The fake tools illustrate a broader and growing cyber risk tied to the global surge in AI adoption. By mimicking the interfaces of legitimate platforms offering text-to-video or image-to-video functionality, these malicious sites prompt users to download files embedded with the STARKVEIL malware.
Once installed, the malware can log keystrokes, execute remote commands, take screenshots and propagate through USB drives.
The websites, including one impersonating the real text-to-video service Luma AI, follow a multi-step flow to build trust before delivering the malicious payload. Clicking "generate" initiates a phoney loading screen, followed by a download button leading to an infected ZIP archive.
STARKVEIL drops multiple modular malware families, including FROSTRIFT, a .NET backdoor that scans for crypto wallets and popular password manager browser extensions.
Cybersecurity firm Morphisec, which separately identified the payload earlier this month, described it as previously undocumented malware designed for credential theft, wallet exfiltration, and optional remote access deployment.
"What makes this campaign unique is its exploitation of AI as a social engineering lure—turning an emerging legitimate trend into an infection vector," Morphisec said.
"Unlike older malware campaigns disguised as pirated software or game cheats, this operation targets a newer, more trusting audience: creators and small businesses exploring AI for productivity."
Google urged users to verify domain legitimacy and exercise caution when engaging with AI tools, noting that interest in generative AI has broadened the potential victim pool far beyond tech enthusiasts.
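The domain checks Google recommends can be partly automated. As a minimal sketch (the allowlist domain below is illustrative, not an endorsement of any specific service), the key is to compare a URL's exact hostname against a known-good domain rather than searching for the brand name anywhere in the address, since lookalike hosts such as `lumalabs.ai.evil.com` embed the real name inside a malicious one:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration only; users should confirm a
# vendor's official domain through its verified channels.
TRUSTED_DOMAINS = {"lumalabs.ai"}

def is_trusted(url: str) -> bool:
    """Return True only if the URL's hostname exactly matches a trusted
    domain or is a direct subdomain of one (e.g. app.lumalabs.ai).

    Lookalikes such as lumalabs.ai.evil.com or luma-ai-video.com fail,
    because the check anchors on the registered domain at the end of
    the hostname instead of substring matching.
    """
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Substring-based checks (`"lumalabs.ai" in url`) would pass the spoofed hosts above, which is exactly the trick these campaigns rely on.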