AI Platforms Vulnerable to Stealthy Malware Communication: A Growing Cybersecurity Concern


Researchers Discover Novel Technique to Leverage AI Platforms for Stealthy Command-and-Control Activity

Researchers at Check Point have discovered a novel technique used by threat actors to leverage AI platforms as a stealthy relay for command-and-control (C2) activity. This approach enables attackers to communicate with compromised machines without being detected by traditional security measures.

Exploiting AI Assistants

The researchers found that AI assistants such as Grok and Microsoft Copilot can be abused to relay C2 traffic because of their web-browsing and URL-fetching capabilities. By instructing the AI agent to fetch an attacker-controlled URL, the malware receives the page contents in the AI's output, effectively creating a bidirectional communication channel.

Demonstrating the Technique

To demonstrate this technique, the researchers created a proof-of-concept (PoC) that uses the WebView2 component in Windows 11 to interact with the AI service. Even if the WebView2 component is missing on the target system, the attacker can deliver it embedded in the malware. The researchers used a C++ program to open a WebView pointing to either Grok or Copilot, allowing the attacker to submit instructions to the assistant.

Creating a Trusted Communication Channel

The attacker-controlled webpage embeds instructions that can be changed at will; the AI extracts or summarizes them in response to the malware's query. The malware then parses the assistant's response to recover those instructions. The result is a trusted communication channel via the AI service, through which data can be exchanged without being flagged or blocked by internet security tools.

Abusing AI Services

The researchers note that this technique is particularly effective because it does not require an account or API keys for the AI services, making it difficult to track and block the attacker’s infrastructure. Additionally, the usual safeguards that block obviously malicious exchanges on AI platforms can be easily bypassed by encrypting the data into high-entropy blobs.

Microsoft’s Response

Microsoft has been contacted to comment on whether Copilot is still exploitable in the way demonstrated by Check Point and what safeguards could prevent such abuse.

According to the researchers, AI can be abused in multiple ways, including operational reasoning, such as assessing a target system's value and deciding how to proceed without raising alarms. This technique is just one example of how AI services can be exploited by threat actors.


