OpenClaw Vulnerability Exposes AI Agents to Website Hijacking Attacks



A vulnerability in the OpenClaw AI assistant has been discovered, allowing attackers to hijack agents by luring victims to malicious websites.

What is OpenClaw?

OpenClaw is a self-hosted AI agent that runs a local WebSocket server, acting as a gateway for authentication, orchestration, and configuration management.

The Vulnerability

The gateway binds to localhost by default, on the assumption that local access is inherently trusted. That assumption turned out to be the vulnerability: any website a victim visits can also reach localhost from inside the browser, so an attacker can exploit the gateway simply by luring a victim to a malicious page.

The Attack Scenario

The attack begins with JavaScript on a malicious website opening a WebSocket connection to the OpenClaw gateway on its local port. Because the browser's same-origin policy does not prevent a page from opening WebSocket connections to localhost, the script can then brute-force the gateway password, which the gateway does not rate-limit.

This allows the attacker to gain an authenticated session with administrator privileges, giving them full control of OpenClaw.
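One defense worth noting: the browser does send the page's Origin header with every WebSocket handshake, so a local gateway can refuse connections initiated by unknown web pages; the weakness is that nothing forces the server to check it. The sketch below illustrates such a check in Python; the allowlist and function name are illustrative assumptions, not OpenClaw's actual code.

```python
# Hypothetical Origin check for a local WebSocket gateway.
# Browsers always attach the page's Origin to the handshake;
# non-browser clients (e.g. a local CLI tool) typically send none.
ALLOWED_ORIGINS = {"http://localhost:8080"}  # illustrative allowlist

def origin_is_trusted(headers: dict) -> bool:
    """Reject WebSocket handshakes initiated by unknown web pages."""
    origin = headers.get("Origin")
    if origin is None:
        return True  # not browser-initiated; likely a local tool
    return origin in ALLOWED_ORIGINS

print(origin_is_trusted({"Origin": "https://evil.example"}))   # False
print(origin_is_trusted({"Origin": "http://localhost:8080"}))  # True
print(origin_is_trusted({}))                                   # True
```

A check like this blocks the drive-by scenario above even before authentication, because the malicious page cannot forge its own Origin value.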

Consequences of the Vulnerability

With this level of access, the attacker can interact with the agent, extract configurations, enumerate nodes, and read logs. This enables them to instruct the agent to search for sensitive information, such as API keys, read private messages, exfiltrate files, or execute arbitrary shell commands on paired nodes.

Discovery and Resolution

The vulnerability was discovered by Oasis Security, which notes that the attack can be carried out without the need for malicious extensions or user interaction.

The OpenClaw security team has addressed the vulnerability in version 2026.2.25 and later. Users are advised to update to the latest version to prevent exploitation.

Conclusion

The vulnerability highlights the importance of securing AI assistants and agents, as they can provide a gateway to sensitive information and systems.

The incident also serves as a reminder to implement robust security measures, including rate limiting and strong authentication, so that unauthorized parties cannot gain access to AI agents and assistants.


