Advanced Cyber Threats

WormGPT, a newly discovered generative AI cybercrime tool, enables adversaries to conduct sophisticated phishing and business email compromise (BEC) attacks. The technology streamlines the creation of fake emails that are highly convincing and tailored to the recipient, boosting the likelihood that an attack will succeed.

Going Through the Details

WormGPT is built on GPT-J, an open-source large language model released in 2021, and offers a number of noteworthy features. These include unlimited character support, the ability to format code, and retention of chat memory across a conversation.

  • Tools like WormGPT can be powerful weapons in the hands of threat actors, especially now that companies such as Google (Bard) and OpenAI (ChatGPT) are stepping up their efforts to prevent the abuse of large language models (LLMs) for generating malicious code and convincing phishing emails.
  • Recent research by Check Point found that Bard has far fewer cybersecurity protections against abuse than ChatGPT, which makes it simpler to create malicious content with Bard.

Generative AI for BEC attacks

  • Generative AI can produce emails with flawless grammar that appear legitimate, reducing the likelihood that recipients will become suspicious.
  • Generative AI also lowers the barrier to entry for sophisticated BEC attacks. Because the technology requires little technical knowledge to use, it puts these attacks within reach of a much broader range of criminals.

Latest attacks leveraging ChatGPT

  • Cyberattacks exploiting ChatGPT-related lures spiked in May, and rogue lookalike domains have been appearing with increasing frequency.
  • One in every 25 new ChatGPT-related domains registered since the start of 2023 has been malicious or potentially harmful, and the number of these attack attempts has steadily increased in recent months (a simple domain-screening heuristic is sketched after this list).
  • In April, cybercriminals were found exploiting the growing popularity of ChatGPT and Google Bard to spread malware, with recent campaigns delivering the RedLine stealer via fake Facebook posts.
  • They took advantage of the excitement around AI language models to promote these fake posts through hijacked Facebook business accounts, luring users into downloading malicious files.
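
Security teams can get early warning about such lookalike domains with even a simple screening pass over newly registered domain names. The Python sketch below is purely illustrative: the patterns, allow-list, and sample domains are assumptions rather than a vetted detection rule, and production tooling would also weigh registration data, reputation feeds, and page content.

```python
import re

# Illustrative screening heuristic only: the patterns, allow-list, and example
# domains below are hypothetical. Real blocklists combine name matching with
# registration data, reputation feeds, and content analysis.
SUSPICIOUS_PATTERNS = [
    r"chat[-_]?gpt",   # chatgpt, chat-gpt, chat_gpt
    r"open[-_]?ai",    # openai, open-ai
    r"gpt[-_]?4",      # gpt4, gpt-4
]

OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}


def looks_suspicious(domain: str) -> bool:
    """Flag a domain that imitates ChatGPT/OpenAI branding but is not official."""
    domain = domain.lower().rstrip(".")
    if domain in OFFICIAL_DOMAINS or domain.endswith(".openai.com"):
        return False
    return any(re.search(pattern, domain) for pattern in SUSPICIOUS_PATTERNS)


if __name__ == "__main__":
    # Hypothetical feed of newly registered domains.
    new_registrations = [
        "chat-gpt-app-download.com",
        "openai-login-support.net",
        "example.org",
    ]
    for name in new_registrations:
        print(f"{name}: {'REVIEW' if looks_suspicious(name) else 'ok'}")
```

Flagged domains would still need manual review; the point is simply to surface registrations that trade on ChatGPT branding before users encounter them.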

Wrapping Up

The bottom line: as AI develops, it gives attackers fresh methods of attack, so strong preventative strategies must be put in place. Organizations should build updated training curricula to defend against AI-enhanced BEC attacks, and strong email verification procedures (such as SPF, DKIM, and DMARC) provide protection against AI-driven phishing and BEC attempts. A minimal sketch of one such check follows.
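
One practical piece of that email verification layer is confirming that your own domains (and those of frequent correspondents) actually publish SPF and DMARC records. The sketch below is a minimal illustration, assuming the third-party dnspython library is installed and using example.com as a placeholder domain; a real audit would also cover DKIM selectors and the strictness of the DMARC policy (p=none versus quarantine or reject).

```python
import dns.exception
import dns.resolver  # third-party package: dnspython


def get_txt_records(name: str) -> list[str]:
    """Return TXT records for a DNS name, or an empty list if the lookup fails."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except dns.exception.DNSException:
        return []
    return [b"".join(rdata.strings).decode(errors="replace") for rdata in answers]


def check_email_auth(domain: str) -> None:
    """Report whether a domain publishes SPF and DMARC policies."""
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]
    print(f"{domain}: SPF {'found' if spf else 'MISSING'}, DMARC {'found' if dmarc else 'MISSING'}")
    for record in spf + dmarc:
        print("  ", record)


if __name__ == "__main__":
    check_email_auth("example.com")  # replace with the domain you want to audit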

About The Author:

Yogesh Naager is a content marketer who specializes in cybersecurity and the B2B space. Besides writing for the News4Hackers blog, he has also written for brands including CollegeDunia, Utsav Fashion, and NASSCOM. Naager entered the content field in an unusual way: he began his career as an insurance sales executive, where he developed an interest in simplifying difficult concepts. He combines this interest with a love of storytelling, which makes him an effective writer in the cybersecurity field. He also writes frequently for Craw Security.


Kindly read other news articles:

Data Leak related to the customers of VirusTotal Cyber Security Services

Online Scam Costs a Man in Mangaluru ₹5.4 Lakh

 
