BTCC / BTCC Square / coincentral /
SlowMist Reveals: AI Coding Tools Could Trigger Silent Crypto Attacks—Are Your Assets Safe?

SlowMist Reveals: AI Coding Tools Could Trigger Silent Crypto Attacks—Are Your Assets Safe?

Published:
2026-01-08 12:19:39
7
1

Your AI assistant might be writing your downfall.

Security firm SlowMist just dropped a bombshell warning: the very AI coding tools developers rely on could be opening backdoors for silent, sophisticated attacks on crypto projects. No flashy exploits, no dramatic crashes—just subtle, persistent vulnerabilities planted in plain sight.

The Invisible Threat in Your IDE

Forget hackers brute-forcing their way in. The new attack vector is the AI pair-programmer suggesting 'optimized' code. It looks legitimate, passes review, but contains logic that leaks keys, misdirects transactions, or slowly drains liquidity pools. By the time anyone notices, the damage is done—and traced back to a 'helpful' AI suggestion from months prior.

Why Crypto Is Uniquely Vulnerable

Smart contracts are immutable. Once deployed, flawed code lives forever on-chain. AI tools trained on public repositories might inadvertently replicate past vulnerabilities or introduce novel ones. The result? A ticking time bomb in a multi-signature wallet or a decentralized exchange—all thanks to an over-trusted autocomplete.

It’s the ultimate irony: tools promising efficiency might be undermining the very security crypto boasts about. Another fine example of innovation outpacing prudence—just ask anyone who’s ever aped into a 'revolutionary' yield farm two days before the rug pull.

The Silent Siege Has Already Begun

SlowMist hints at early-stage evidence. Patterns in recent exploits suggest attackers aren’t just writing malicious code—they’re tricking AI into writing it for them. Poisoned training data, cleverly crafted prompts, and exploiting AI’s bias toward 'working' code over 'secure' code create a perfect storm.

The fix? Paranoid code audits, AI-output verification, and remembering that no tool replaces human skepticism. In crypto, trust is the ultimate vulnerability—whether it’s in a founder’s promises or an AI’s suggestions.

TLDR

  • SlowMist reported a critical flaw in AI coding tools that threatens crypto developer systems.
  • The vulnerability executes malware automatically when developers open untrusted project folders.
  • Cursor and other AI coding tools were shown to be especially vulnerable during controlled demonstrations.
  • Attackers embed malicious prompts in files like README.md and LICENSE.txt that AI tools interpret as instructions.
  • North Korean threat groups have used smart contracts to deliver malware without leaving traces on blockchain networks.

A new vulnerability in AI coding tools puts developer systems at immediate risk, according to a recent alert from SlowMist, as attackers can now exploit trusted environments without triggering alarms, threatening crypto projects, digital assets, and developer credentials alike.

🚨SlowMist TI Alert🚨

If you’re doing Vibe Coding or using mainstream IDEs, be cautious when opening any project or workspace. For example, simply using “Open Folder” on a project may trigger system command execution — on both Windows and macOS.

⚠Cursor users: especially at… pic.twitter.com/9pNgqKoZKm

— SlowMist (@SlowMist_Team) January 8, 2026

AI Tools Executing Malicious Code Through Routine Operations

SlowMist warned that AI coding assistants can be exploited through hidden instructions placed inside common project files like README.md and LICENSE.txt.

The flaw activates when users open a project folder, allowing malware to execute commands on macOS or Windows systems without prompts.

This attack requires no confirmation from the developer, making it dangerous for crypto-related development environments holding sensitive data or wallets.

The attack method, called the “CopyPasta License Attack,” was first disclosed by HiddenLayer in September through extensive research on embedded markdown payloads.

Attackers manipulate how AI tools interpret markdown files by hiding malicious prompts inside comments that AI systems treat as code instructions.

Cursor, a popular AI-assisted coding platform, was confirmed vulnerable, along with Windsurf, Kiro, and Aider, according to HiddenLayer’s technical report.

The malware executes when AI agents read instructions and copy them into the codebase, compromising entire projects silently.

“Developers are exposed even before writing any code,” HiddenLayer said, adding that “AI tools become unintentional delivery vectors.”

Cursor users face the highest exposure, as documented in controlled demonstrations showcasing complete system compromise after basic folder access.

State-Backed Attacks on Crypto Projects Intensify

North Korean attackers have increased focus on blockchain developers using new techniques to embed backdoors in smart contracts.

According to Google’s Mandiant team, group UNC5342 deployed malware including JADESNOW and INVISIBLEFERRET across ethereum and BNB Smart Chain.

The method stores payloads in read-only functions to avoid transaction logs and bypass conventional blockchain tracking.

Developers are unknowingly executing malware simply by interacting with these smart contracts through decentralized platforms or tools.

BeaverTail and OtterCookie, two modular malware strains, were used in phishing campaigns disguised as job interviews with crypto engineers.

The attacks used fake companies like Blocknovas and Softglide to distribute malicious code through NPM packages.

Silent Push researchers traced both firms to vacant properties, revealing they operated as fronts for the “Contagious Interview” malware operation.

Once infected, compromised systems sent credentials and codebase data to attacker-controlled servers using encrypted communication.

AI-Powered Exploits and Scams Escalate Rapidly

Anthropic’s recent testing revealed AI tools exploited half of smart contracts in its SCONE-bench benchmark, simulating $550.1 million in damages.

Claude Opus 4.5 and GPT-5 found working exploits in 19 smart contracts deployed after their respective training cutoffs.

Two zero-day vulnerabilities were identified in active Binance Smart Chain contracts worth $3,694, at a model API cost of $3,476.

The study showed exploit discovery speed doubled monthly, while token costs per working exploit decreased sharply.

Chainabuse reported AI-driven crypto scams ROSE 456% year-over-year by April 2025, fueled by deepfake videos and voice clones.

Scam wallets received 60% of deposits from AI-generated campaigns featuring convincing fake identities and real-time automated replies.

Attackers now deploy bots to simulate technical interviews and lure developers into downloading disguised malware tools.

Despite these risks, crypto-related hacks fell 60% to $76 million in December from November’s $194.2 million, according to PeckShield.

|Square

Get the BTCC app to start your crypto journey

Get started today Scan to join our 100M+ users

All articles reposted on this platform are sourced from public networks and are intended solely for the purpose of disseminating industry information. They do not represent any official stance of BTCC. All intellectual property rights belong to their original authors. If you believe any content infringes upon your rights or is suspected of copyright violation, please contact us at [email protected]. We will address the matter promptly and in accordance with applicable laws.BTCC makes no explicit or implied warranties regarding the accuracy, timeliness, or completeness of the republished information and assumes no direct or indirect liability for any consequences arising from reliance on such content. All materials are provided for industry research reference only and shall not be construed as investment, legal, or business advice. BTCC bears no legal responsibility for any actions taken based on the content provided herein.