North Korean Hackers Use ChatGPT to Create Deepfake IDs for Phishing the South Korean Military

North Korean hackers used ChatGPT to create deepfake IDs for phishing attacks targeting South Korea's military.

“A spear-phishing attempt on a South Korean defense agency by North Korean Kimsuky hackers using AI-generated deepfake IDs brought attention to the growing abuse of AI in cyberthreats aimed at national security.”

According to a report released on Monday, a hacking group with ties to North Korea used deepfake images generated by artificial intelligence (AI) in a cyberattack on South Korean organizations, including one involved in defense.

A July investigation by the Genians Security Center (GSC), a South Korean security research center, found that the Kimsuky group, a hacking outfit believed to be backed by the North Korean government, carried out a spear-phishing attack on a military-related organization, Yonhap news agency reported.

Spear-phishing is a form of targeted attack, frequently carried out by sending personalized emails that impersonate trusted sources.


Report

Disguised as correspondence about the issuing of identification documents for officials with military affiliations, the attackers sent an email attachment containing malware. The ID card image used in the attempt is believed to have been created by a generative AI model, demonstrating the Kimsuky group's use of deepfake technology.


Separately, as part of a broader effort to evade international sanctions and earn foreign currency for the regime, North Korean IT personnel have created fake virtual identities in order to pass technical tests during hiring processes.


According to GSC, these incidents demonstrate North Korea's expanding efforts to use AI services for increasingly sophisticated and malicious purposes.


“Although AI services are effective instruments for increasing productivity, they also pose possible risks when abused as national security-level cyber threats.”


“As a result, companies need to be ready for the potential for AI abuse and keep an eye on security throughout hiring, operations, and business procedures.”


Because government-issued identification documents are legally protected, AI systems such as ChatGPT typically refuse requests to produce copies of military IDs.

But according to the GSC report, the hackers appear to have circumvented these restrictions by requesting mock-ups or sample designs for “legitimate” purposes rather than exact replicas of real IDs.

The findings follow a separate report, released in August by US-based Anthropic, the company behind the AI service Claude, detailing the misuse of AI by North Korean IT personnel.


About The Author

Suraj Koli is a content specialist in technical writing about cybersecurity and information security. He has written numerous articles on cybersecurity concepts, the latest trends in cyber awareness, and ethical hacking.
