Cybersecurity firm HiddenLayer has uncovered a serious vulnerability in Cursor, a popular AI-powered coding assistant heavily utilized by Coinbase engineers, that enables attackers to inject malicious code capable of self-propagating across entire organizations. Disclosed on September 5, 2025, the exploit—dubbed the “CopyPasta License Attack”—exploits Cursor’s reliance on large language models (LLMs) by hiding malicious prompts within innocuous files like README.md or LICENSE.txt. These files are treated as authoritative by the AI, leading it to replicate the infected content into new codebases, potentially introducing backdoors, data exfiltration, or resource-draining operations without user awareness.
The attack works by embedding hidden instructions in markdown comments or syntax elements, tricking Cursor into inserting arbitrary code during code generation or editing. HiddenLayer researchers demonstrated how this could stage persistent backdoors, silently siphon sensitive data, or manipulate critical files, all while evading detection due to the obfuscated nature of the payload. “Injected code could stage a backdoor, silently exfiltrate sensitive data or manipulate critical files,” the firm stated in its report, emphasizing the low-effort scalability of the technique across repositories. Similar flaws were identified in other AI tools like Windsurf, Kiro, and Aider, highlighting a broader risk in LLM-based development environments.
The disclosure comes amid Coinbase’s aggressive push toward AI integration. CEO Brian Armstrong revealed on September 4, 2025, that AI tools like Cursor have generated up to 40% of the exchange’s code, with ambitions to hit 50% by October. In August, Coinbase engineers confirmed Cursor as their preferred tool, aiming for full adoption by every engineer by February 2026. This reliance has drawn criticism, with some developers labeling Armstrong’s mandates as “performative” and prioritizing speed over security, especially given Coinbase’s role as a major crypto custodian handling billions in assets. Armstrong clarified that AI use is limited to user interfaces and non-sensitive backends, with critical systems adopting more cautiously, but experts warn the vulnerability could still expose intellectual property or operational integrity.
The crypto industry, already reeling from billions in AI-driven exploits in 2025, faces heightened scrutiny. HiddenLayer and independent researchers from BackSlash Security independently verified the issue, urging organizations to treat all untrusted LLM inputs as potentially malicious and implement systematic detection. Cursor has not yet publicly responded, though prior vulnerabilities (like a July 2025 remote code execution flaw patched in version 1.3) show responsiveness to disclosures. Coinbase did not immediately comment on mitigation steps.
This incident underscores the double-edged sword of AI coding tools: boosting productivity while introducing novel supply-chain risks. As adoption surges—Cursor powers workflows for clients like monday.com, serving 60% of Fortune 500 firms—experts call for “secure by design” principles, including real-time AI detection and response solutions like HiddenLayer’s AIDR. The vulnerability serves as a stark reminder that in AI-assisted development, unchecked automation could amplify threats organization-wide.
Leave a Reply