A Bug in the Cursor AI Code Editor Allows Silent Code Execution via Malicious Repositories


The artificial intelligence (AI)-powered code editor Cursor has been found to have a security flaw that could cause code execution when a maliciously constructed repository is opened with the application.

The root of the problem is that a built-in security setting is disabled by default, allowing attackers to run arbitrary code on users’ computers with the users’ privileges.

“Because Cursor ships with Workspace Trust disabled by default, VS Code-style tasks defined with runOptions.runOn: ‘folderOpen’ auto-execute the instant a developer browses a project,” according to an analysis by Oasis Security. “A malicious .vscode/tasks.json turns a casual ‘open folder’ into silent code execution in the user’s context.”

Workspace Trust is a feature of Visual Studio Code, the editor on which the AI-powered Cursor is built, that lets developers safely browse and edit code regardless of its source or author.


When this option is turned off, an attacker can publish a project on GitHub (or any other platform) that contains a hidden “autorun” instruction telling the IDE to start a task as soon as the folder is opened. The moment the victim browses the booby-trapped repository in Cursor, the malicious code runs.
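To illustrate the mechanism, here is a minimal sketch of such a booby-trapped tasks file. The file location and the runOptions.runOn key come from the standard VS Code tasks schema that Cursor inherits; the task label and the echo command are harmless placeholders standing in for an attacker’s actual payload.

```json
{
  // .vscode/tasks.json — hypothetical malicious task definition (illustrative sketch)
  "version": "2.0.0",
  "tasks": [
    {
      "label": "project-setup",
      "type": "shell",
      // a real attack would run a payload here; echo is a harmless stand-in
      "command": "echo code-execution-as-current-user",
      "runOptions": {
        // the key ingredient: run the task automatically when the folder is opened
        "runOn": "folderOpen"
      }
    }
  ]
}
```

With Workspace Trust disabled, nothing prompts the victim before a task like this runs.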

“This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks,” said Erez Schwartz, a researcher at Oasis Security.

To mitigate the risk, users are advised to enable Workspace Trust in Cursor, open untrusted repositories in a different code editor, and audit them before opening them in the tool.
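Assuming Cursor honors the same setting identifiers as upstream VS Code (an assumption worth verifying against your Cursor version), Workspace Trust can be re-enabled with a small addition to the user settings.json:

```json
{
  // restore Workspace Trust so untrusted folders open in restricted mode
  "security.workspace.trust.enabled": true,
  // always prompt before trusting a newly opened folder
  "security.workspace.trust.startupPrompt": "always"
}
```

With trust restored, tasks in an untrusted folder should not auto-execute until the user explicitly trusts it.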

The development coincides with the emergence of prompt injections and jailbreaks as a systemic and covert threat to AI-powered coding and reasoning agents such as Windsurf, Cline, K2 Think, and Claude Code. These techniques let threat actors covertly embed malicious instructions that trick the systems into performing harmful actions or leaking data from software development environments.

According to a report released last week by software supply chain security firm Checkmarx, Anthropic’s recently introduced automated security reviews in Claude Code may unintentionally expose projects to risk: prompt injections embedded in code can tell the tool to disregard vulnerable code, allowing developers to push malicious or insecure code past security review.

“In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe,” the company stated. “The end result: a developer – whether malicious or just trying to shut Claude up – can easily trick Claude into thinking a vulnerability is safe.”

Another issue is that the AI review process itself creates and runs test cases. If Claude Code isn’t adequately sandboxed, this can result in malicious code being executed against production data.


Anthropic has also cautioned that Claude’s newly released file creation and editing feature carries prompt injection risks, as it operates in a “sandboxed computing environment with limited internet access.”

In particular, a malicious actor could “inconspicuously” insert instructions through external files or websites (also known as indirect prompt injection) to fool the chatbot into downloading and executing untrusted code or reading private information from a knowledge source connected via the Model Context Protocol (MCP).

“This means Claude can be tricked into sending information from its context (e.g., prompts, projects, data via MCP, Google integrations) to malicious third parties,” Anthropic stated. “To mitigate these risks, we recommend you monitor Claude while using the feature and stop it if you see it using or accessing data unexpectedly.”

But that’s not all. The company also disclosed late last month that browser-using AI models, such as Claude for Chrome, are susceptible to prompt injection attacks. To counter this threat, it has put in place a number of safeguards that have reduced the attack success rate from 23.6% to 11.2%.

“New forms of prompt injection attacks are also constantly being developed by malicious actors,” it stated. “By uncovering real-world examples of unsafe behavior and new attack patterns that aren’t present in controlled tests, we’ll teach our models to recognize the attacks and account for the related behaviors, and ensure that safety classifiers will pick up anything that the model itself misses.”

However, these tools have also been found to be vulnerable to conventional security flaws, expanding the attack surface and carrying potential real-world impact:

WebSocket Authentication Bypass in Claude Code IDE Extensions (CVE-2025-52882, CVSS score: 8.8): An attacker could trick a user into visiting a malicious website, allowing them to connect to the user’s WebSocket server without authentication and achieve remote command execution on the victim’s system.
SQL Injection in the Postgres MCP Server: An attacker could bypass restrictions and run arbitrary SQL commands on the server, potentially causing damage or leaking data.
Path Traversal in Microsoft NLWeb: An attacker could use a specially crafted URL to access sensitive files, such as system configurations and cloud credentials, on the victim’s system.
Incorrect Authorization in Lovable (CVE-2025-48757, CVSS score: 9.3): The flaw allowed unauthenticated attackers to read and modify arbitrary database tables of sites built with Lovable.
Multiple Vulnerabilities in Base44: These included open redirects, cross-site scripting (XSS), and data leakage; an attacker could access a victim’s apps, steal API keys, inject malicious code, and exfiltrate data.
Vulnerability in Ollama Desktop: Due to poor cross-origin controls, an attacker could trick a user into visiting a malicious website and change the application’s settings to intercept chats and even alter responses using corrupted models.

“As AI-powered development speeds up, the biggest threats are usually not advanced AI attacks, but rather failures in traditional security measures,” said Imperva. “To safeguard the expanding network of ‘vibe coding’ platforms, security should be seen as a core element, not something added later.”

About The Author:

Yogesh Naager is a content marketer who specializes in the cybersecurity and B2B space.  Besides writing for the News4Hackers blogs, he also writes for brands including Craw Security, Bytecode Security, and NASSCOM.

Read More:

Pension Scam in India: A Retired Railway Engineer’s Phone Got Hacked
