AI-Powered Cyberattacks: Hackers Leveraging Artificial Intelligence to Evade Detection

Cybercriminals Increasingly Leverage Artificial Intelligence in Attacks

A recent report from Microsoft’s Threat Intelligence unit highlights the growing trend of hackers utilizing artificial intelligence (AI) throughout all stages of cyberattacks. According to the report, threat actors are harnessing AI tools to accelerate operations, expand malicious campaigns, and reduce the technical expertise required to execute complex attacks.

AI Tools Used in Various Activities

The report notes that attackers are employing generative AI for various activities, including reconnaissance, phishing, infrastructure development, malware creation, and post-compromise operations. Specifically, threat actors are leveraging large language models (LLMs) to generate text, code, and media that support cybercrime activities.

These AI tools are being used to draft convincing phishing emails, translate content into multiple languages, summarize stolen data, generate or debug malware code, write scripts, and configure attack infrastructure. Microsoft notes that AI currently serves as a “force multiplier” that enables attackers to move faster and with greater efficiency, while humans remain in control of targeting and decision-making.

Threat Groups Using AI

The report highlights several threat groups that are incorporating AI into their operations, including North Korean actors known as Jasper Sleet and Coral Sleet. These groups use AI as part of remote IT worker schemes, where attackers attempt to infiltrate Western companies by posing as legitimate employees. AI helps generate realistic identities, resumes, and communication messages to secure employment and maintain access inside organizations.

Microsoft researchers also found that cybercriminals are using AI coding tools to develop and refine malicious code, troubleshoot programming errors, and convert malware components between programming languages. Some experiments indicate the early development of AI-enabled malware capable of dynamically generating scripts or modifying behavior during execution.

Coral Sleet has also reportedly used AI to quickly generate fake company websites, set up attack infrastructure, and troubleshoot deployments. When AI platforms attempt to block malicious usage, attackers often try to bypass restrictions using “jailbreaking” techniques that trick AI models into producing harmful content.

Autonomous AI Systems

Microsoft also observed threat actors experimenting with agentic AI systems that can perform tasks autonomously and adjust their behavior based on results. However, the company notes that AI is currently used mostly to assist decision-making, rather than to launch fully autonomous cyberattacks.

Recommendations

In light of these findings, Microsoft advises organizations to treat AI-assisted attacks as insider-risk scenarios. The company recommends monitoring unusual credential activity, strengthening identity systems against phishing, and protecting AI systems that could become targets in future attacks. As AI continues to improve productivity and innovation, it is also becoming a powerful tool for cybercriminals, making modern cyber defense more complex than ever before.
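The recommendation to monitor unusual credential activity can be illustrated with a minimal sketch. The event format, thresholds, and country-based heuristic below are illustrative assumptions for this example, not details from Microsoft's report: the idea is simply to flag accounts whose recent logins span an implausible number of distinct locations within a short window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative sketch only: flag accounts with logins from many distinct
# countries inside a short time window, a simple proxy for the "unusual
# credential activity" the report recommends monitoring.
WINDOW = timedelta(hours=1)
MAX_DISTINCT_COUNTRIES = 2  # assumed threshold for this example

def flag_suspicious_logins(events):
    """events: list of (username, country, timestamp) tuples, sorted by time.

    Returns the set of usernames whose logins span more than
    MAX_DISTINCT_COUNTRIES distinct countries within WINDOW.
    """
    flagged = set()
    history = defaultdict(list)  # username -> [(timestamp, country), ...]
    for user, country, ts in events:
        history[user].append((ts, country))
        recent = {c for t, c in history[user] if ts - t <= WINDOW}
        if len(recent) > MAX_DISTINCT_COUNTRIES:
            flagged.add(user)
    return flagged

events = [
    ("alice", "US", datetime(2025, 1, 1, 9, 0)),
    ("alice", "US", datetime(2025, 1, 1, 9, 20)),
    ("bob",   "US", datetime(2025, 1, 1, 9, 0)),
    ("bob",   "RO", datetime(2025, 1, 1, 9, 10)),
    ("bob",   "CN", datetime(2025, 1, 1, 9, 30)),
]
print(flag_suspicious_logins(events))  # {'bob'}
```

A production system would draw on richer signals (device fingerprints, impossible-travel speed, MFA outcomes), but even a crude rule like this demonstrates the insider-risk framing Microsoft suggests: watch for identity behavior that a legitimate employee is unlikely to produce.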
