State-Sponsored Hackers Exploit Google’s Gemini AI for Multi-Stage Attacks
Cyber attackers, including nation-state actors from China, Iran, North Korea, and Russia, have been leveraging Google’s Gemini AI model to support various stages of their attacks, ranging from reconnaissance and phishing to post-compromise actions.
Attackers Leverage Gemini for Various Tasks
According to a report by the Google Threat Intelligence Group (GTIG), these adversaries are utilizing Gemini for tasks such as target profiling, open-source intelligence gathering, generating phishing lures, translating text, coding, vulnerability testing, and troubleshooting.
The GTIG notes that Advanced Persistent Threat (APT) adversaries are employing Gemini to support their campaigns, from initial reconnaissance to command and control (C2) development and data exfiltration.
For instance, Chinese threat actors used Gemini to automate vulnerability analysis and create tailored testing plans against specific US-based targets. They also employed the model to analyze Remote Code Execution (RCE) and Web Application Firewall (WAF) bypass techniques, as well as the results of SQL injection tests.
Iranian Threat Actor APT42 Leverages Gemini for Social Engineering
Iranian threat actor APT42 leveraged Gemini for social engineering campaigns, using the model as a development platform to speed up the creation of tailored malicious tools. The group utilized Gemini for debugging, code generation, and researching exploitation techniques.
Additionally, threat actors from China and other nations used Gemini to add new capabilities to existing malicious tooling, including the CoinBait phishing kit and the HonestCue malware framework.
HonestCue Malware Framework Uses Gemini API
The HonestCue malware framework, observed in late 2025, uses the Gemini API to generate C# code for second-stage malware, which is then compiled and executed in memory.
CoinBait, a phishing kit built as a React single-page application (SPA), masquerades as a cryptocurrency exchange to harvest credentials and contains artifacts indicating the use of AI code-generation tools.
Cybercriminals Use Generative AI Services in ClickFix Campaigns
Cybercriminals are also using generative AI services in ClickFix campaigns to deliver the AMOS info-stealing malware for macOS. Users searching for fixes to specific technical problems are lured by malicious ads in the search results into executing attacker-supplied commands.
Google Flags AI Model Extraction and Distillation as a Threat
The GTIG report highlights that Gemini has faced attempts at AI model extraction and distillation, in which adversaries use authorized API access to query the system and reproduce its decision-making.
This technique, known as “knowledge distillation,” enables attackers to accelerate AI model development at a lower cost. Google flags these attacks as a threat because they constitute intellectual property theft and undermine the business model of AI-as-a-service.
In one instance, Gemini was targeted with 100,000 prompts aimed at replicating the model’s reasoning across a range of tasks in non-English languages.
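To make the extraction-by-distillation idea concrete, the following is a minimal, deliberately toy sketch: an “attacker” with only query access to a black-box “teacher” harvests input/output pairs and fits a cheap “student” that mimics the teacher’s decisions. The teacher here is a trivial threshold rule and the student a nearest-neighbour lookup — hypothetical stand-ins chosen for illustration only, not a depiction of Gemini or any real API.

```python
import random

def teacher(x: float) -> int:
    """Black-box model the attacker can only query, never inspect."""
    return 1 if x >= 0.5 else 0

# Step 1: issue many queries and record the answers. (The report describes
# roughly 100,000 prompts against Gemini; 1,000 suffices for this toy rule.)
random.seed(0)
queries = [random.random() for _ in range(1000)]
dataset = [(x, teacher(x)) for x in queries]  # harvested (input, label) pairs

# Step 2: fit a "student" on the harvested labels. A 1-nearest-neighbour
# lookup is the simplest possible student: answer as the closest query did.
def student(x: float) -> int:
    _, nearest_label = min(dataset, key=lambda pair: abs(pair[0] - x))
    return nearest_label

# Step 3: the student now approximates the teacher's decision boundary
# without ever having seen its internals.
grid = [i / 200 for i in range(200)]
agreement = sum(student(x) == teacher(x) for x in grid)
print(f"student/teacher agreement: {agreement}/200")
```

The same economics drive the real attacks: each query is cheap, and enough of them let the attacker approximate behaviour that was expensive to train, which is why Google treats high-volume extraction traffic as abuse.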
Google has disabled accounts and infrastructure tied to documented abuse and implemented targeted defenses in Gemini’s classifiers to make abuse harder.
The company maintains that it designs its AI systems with robust security measures and strong safety guardrails, and that it regularly tests its models to improve their security and safety.
