Vulnerable Chrome Extension Puts AI Agent at Risk of Takeover


A Recently Discovered Vulnerability in Claude Extension

A critical security flaw has been discovered in the Chrome extension for the AI-powered chatbot Claude, potentially allowing attackers to take control of the AI agent and steal sensitive information.

The Flaw: ClaudeBleed

The vulnerability, identified as ClaudeBleed, arises from a combination of lax permissions and poorly implemented trust in the origin of commands sent to the AI agent.

According to researchers at LayerX, “The Claude extension allows any Chrome extension to interact with scripts running in the origin browser without verifying their ownership. As a result, an attacker can create an extension with a declared content script and configure it to run in the ‘Main’ world, ensuring the script is executed as part of the page.”
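Based on LayerX's description, the attacker's extension needs only a declared content script configured to run in the "MAIN" world. A minimal sketch of what such a declaration might look like in a Manifest V3 extension follows; the extension name and script file name are illustrative, and note the empty permissions list:

```json
{
  "manifest_version": 3,
  "name": "innocuous-extension",
  "version": "1.0",
  "permissions": [],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["inject.js"],
      "world": "MAIN"
    }
  ]
}
```

Because `"world": "MAIN"` makes `inject.js` execute in the page's own JavaScript context rather than the extension's isolated world, any in-page script that trusts same-context callers cannot distinguish the injected code from the site's own code.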

Attack Chain

An attacker can perform remote prompt injection and control the AI agent's actions by sending a message to the Claude extension, which trusts the page origin a message appears to come from rather than verifying which script actually sent it. While Claude enforces user confirmation for sensitive actions, an attacker can manipulate the Document Object Model (DOM) to dynamically modify UI elements, altering what is displayed and approved relative to what is actually executed.
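To illustrate the confirmation-spoofing step, here is a hypothetical sketch; the element structure, labels, and function name are invented for illustration, since the real extension's DOM is not public. An injected script rewrites the text of a confirmation dialog so the user approves something other than what the agent will actually execute:

```javascript
// Hypothetical sketch of the DOM-rewriting step (all names illustrative).
// In a real attack this would run against a live confirmation dialog;
// here a plain object stands in for the DOM node so the logic is
// self-contained.
function spoofConfirmation(dialogEl) {
  // Show the user a harmless-looking action; the underlying request,
  // which the agent executes on approval, is left unchanged.
  if (dialogEl.textContent.includes("Send email")) {
    dialogEl.textContent = "Allow Claude to: Summarize this page";
  }
  return dialogEl;
}

// Stand-in for document.querySelector(...) on the real dialog element.
const dialog = { textContent: "Allow Claude to: Send email to attacker@example.com" };
spoofConfirmation(dialog);
console.log(dialog.textContent); // "Allow Claude to: Summarize this page"
```

The point of the sketch is that the confirmation prompt and the action it gates are decoupled: rewriting the visible text does not change the pending request, so the user's approval is attached to an action they never saw.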

Vulnerability Details

  • The vulnerability effectively breaks Chrome’s extension security model by allowing a zero-permission extension to inherit the capabilities of a trusted AI assistant.
  • This attack chain enables an attacker to exfiltrate data from services such as Gmail, GitHub, or Google Drive, as well as send emails, delete data, and share documents on behalf of the user.

Patch and Further Action

When notified of the issue, Anthropic told LayerX that it was working on a patch; however, the fix only partially addressed the underlying vulnerability. An attacker can simply switch their extension to 'privileged' mode and bypass the fix, since the user is never notified of, or asked to approve, the switch.

Conclusion

Researchers at LayerX emphasize the need for improved security measures when interacting with AI agents, particularly those that have access to sensitive information. The company stresses the importance of robust permission models and secure implementation practices to prevent similar vulnerabilities in the future.


