Infostealer Malware Steals OpenClaw AI Agent Config Files and Gateway Tokens
A recently documented infection underscores how information stealers are expanding their reach to artificial intelligence (AI) agents. Researchers at Hudson Rock identified a case in which an infostealer exfiltrated a victim's entire OpenClaw AI agent configuration environment, marking a significant milestone in the evolution of infostealer behavior.
The malware used a broad file-grabbing routine to capture sensitive files, including openclaw.json, device.json, and soul.md. These files contain critical information, such as the OpenClaw gateway token, cryptographic keys, and details of the agent’s core operational principles.
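The exact storage location varies by installation, but users can quickly check whether these files are readable more broadly than necessary. The following Python sketch assumes a hypothetical ~/.openclaw configuration directory (the real path depends on the deployment) and flags any of the named files that other local accounts can read:

```python
import stat
from pathlib import Path

# Hypothetical location of the OpenClaw configuration directory;
# adjust to wherever the agent actually stores its files.
CONFIG_DIR = Path.home() / ".openclaw"
SENSITIVE_FILES = ["openclaw.json", "device.json", "soul.md"]

for name in SENSITIVE_FILES:
    path = CONFIG_DIR / name
    if not path.exists():
        print(f"{path}: not found, skipping")
        continue
    mode = path.stat().st_mode
    # Flag files readable by the group or by other users on the machine.
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        print(f"{path}: readable by other accounts ({stat.filemode(mode)}) -- consider chmod 600")
    else:
        print(f"{path}: permissions look restricted ({stat.filemode(mode)})")
```

Tightening permissions only limits exposure to other accounts on the same machine; an infostealer running under the victim's own account can still read the files, so preventing the infection in the first place remains the primary defense.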
The theft of the gateway authentication token can allow an attacker to connect to the victim’s local OpenClaw instance remotely or masquerade as the client in authenticated requests to the AI gateway. Hudson Rock noted that the malware may have been looking for standard “secrets,” but inadvertently captured the entire operational context of the user’s AI assistant.
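To see why the token alone is so valuable, consider a minimal sketch of an authenticated request. The address, port, and endpoint below are hypothetical placeholders rather than OpenClaw's documented API; the point is that a bearer-style token presented in a standard Authorization header is typically the only credential a gateway checks, so whoever holds it is indistinguishable from the legitimate client:

```python
import requests

# Hypothetical values: the real gateway address, port, and endpoint depend
# on the deployment and are not documented here.
GATEWAY_URL = "http://192.0.2.10:8700/api/status"
STOLEN_TOKEN = "token-value-taken-from-openclaw.json"

# The gateway only sees a bearer token in a standard Authorization header;
# it cannot distinguish the legitimate client from anyone else holding the token.
response = requests.get(
    GATEWAY_URL,
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
    timeout=5,
)
print(response.status_code, response.text[:200])
```

Rotating or revoking the gateway token as soon as an infection is suspected closes off this access path.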
The Growing Threat of Infostealers Targeting AI Agents
The incident underscores how AI agents, now increasingly embedded in professional workflows, are becoming targets in their own right. As agents like OpenClaw grow more prevalent, infostealer developers may ship dedicated modules designed to decrypt and parse their configuration files, much as they already do for Chrome or Telegram data today.
Response to Security Issues
The disclosure comes as OpenClaw's maintainers have announced a partnership with VirusTotal to scan skills uploaded to ClawHub for malicious behavior, establish a threat model, and add the ability to audit installations for potential misconfigurations. The move follows a string of security issues affecting OpenClaw, including a recent campaign that used a new technique to sidestep VirusTotal scanning by hosting malware on lookalike OpenClaw websites.
In addition, researchers have found hundreds of thousands of OpenClaw instances exposed to the internet, likely putting their users at risk of remote code execution (RCE). RCE vulnerabilities let an attacker send a malicious request to a service and execute arbitrary code on the underlying system, which can then become a pivot point for further attacks.
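A quick way to check a local deployment is to confirm that the gateway is listening only on the loopback interface rather than on all interfaces. The sketch below relies on the third-party psutil library and a hypothetical default port; substitute whatever port the local OpenClaw gateway actually uses:

```python
import psutil

# Hypothetical gateway port; replace it with the port your instance uses.
GATEWAY_PORT = 8700
LOOPBACK_ADDRESSES = {"127.0.0.1", "::1"}

found = False
for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_LISTEN or not conn.laddr:
        continue
    if conn.laddr.port != GATEWAY_PORT:
        continue
    found = True
    if conn.laddr.ip in LOOPBACK_ADDRESSES:
        print(f"OK: gateway listening on loopback only ({conn.laddr.ip}:{GATEWAY_PORT})")
    else:
        print(f"WARNING: gateway listening on {conn.laddr.ip}:{GATEWAY_PORT} "
              "and may be reachable from the network")

if not found:
    print(f"No TCP listener found on port {GATEWAY_PORT}")
```

If the listener is bound to anything other than loopback and remote access is not explicitly required, reconfigure it or place it behind a firewall and an authenticated reverse proxy.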
Prioritizing Security
As AI agents continue to grow in popularity, it is essential for users to prioritize the security of their instances. In practice, that means keeping installations up to date, using secure configurations (for example, not exposing the gateway to the internet and restricting access to configuration files), rotating gateway tokens after a suspected compromise, and monitoring for suspicious activity.
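Monitoring does not have to be elaborate. As a minimal sketch, again assuming a hypothetical ~/.openclaw directory, the script below records SHA-256 hashes of the configuration files and reports anything that has changed since the last run, which can surface unexpected tampering with the agent's configuration:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical paths; adjust them to the actual OpenClaw installation.
CONFIG_DIR = Path.home() / ".openclaw"
BASELINE_FILE = CONFIG_DIR / "integrity-baseline.json"
WATCHED_FILES = ["openclaw.json", "device.json", "soul.md"]


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of the file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def main() -> None:
    current = {
        name: file_digest(CONFIG_DIR / name)
        for name in WATCHED_FILES
        if (CONFIG_DIR / name).exists()
    }
    if BASELINE_FILE.exists():
        baseline = json.loads(BASELINE_FILE.read_text())
        for name, digest in current.items():
            if name in baseline and baseline[name] != digest:
                print(f"CHANGED: {name} differs from the recorded baseline")
    # Save the current state as the baseline for the next run.
    if current:
        BASELINE_FILE.write_text(json.dumps(current, indent=2))


if __name__ == "__main__":
    main()
```

Scheduling such a check, for example via cron, provides a lightweight tripwire that complements endpoint protection rather than replacing it.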
