WormGPT Chatbot Doesn’t Say “No” to Hackers: Know Why?
“Now hackers can practice their skills freely with the help of the WormGPT chatbot, because it doesn’t say no.”
Generative AI is not only transforming businesses; it is also accelerating the criminal underworld. Unguarded models such as WormGPT and DarkGPT are ushering in an era of industrialized cybercrime, arming hackers with automation that was previously unthinkable.
The Inception of “Dark AI”
Seven months after OpenAI unveiled ChatGPT as a “research preview,” a lesser-known chatbot called WormGPT surreptitiously surfaced online in June 2023, targeting hackers rather than students or programmers.
Its developer offered a stripped-down large language model (LLM) devoid of moral filters and safety checks. For €500 (₹51,070) a month, users could generate scam emails, malware code, or phishing templates without any of the polite refusals that are standard in mainstream chatbots.
More than 200 people signed up for WormGPT within months, and some paid thousands for private installations. Security journalist Brian Krebs later identified the developer as Rafael Morais, who insisted his program was meant to be “neutral and uncensored,” not illegal.
WormGPT’s design, however, spoke for itself: it was tailored for users who needed a chatbot that didn’t say “no.”

Misused AI vs. Dark AI
Mainstream AI systems ship with guardrails to block harmful or unlawful outputs. Yet, as cybersecurity specialists note, those protections are easily evaded: a request dressed up as “fictional writing” can trick a chatbot into producing malware instructions or fraud templates.
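To make the idea concrete, here is a minimal sketch of what an input-side guardrail can look like: the prompt is screened by a moderation classifier before it ever reaches the model. It assumes the OpenAI Python SDK; the guarded_generate wrapper and the model name are illustrative, not any vendor’s actual pipeline.

```python
# Minimal sketch of an input-side guardrail, assuming the OpenAI Python SDK.
# A prompt is screened by a moderation classifier before reaching the model;
# dark-AI clones simply ship without this layer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guarded_generate(prompt: str) -> str:
    # Step 1: classify the prompt; "flagged" covers policy-violating content.
    verdict = client.moderations.create(input=prompt)
    if verdict.results[0].flagged:
        return "Request refused: the prompt violates the usage policy."
    # Step 2: only prompts that pass the screen reach the underlying model.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```

A jailbreak works by phrasing a harmful request so the classifier scores it as benign, letting it sail through the first step untouched.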
According to a 2025 arXiv study, 14 of the 17 leading LLMs examined were susceptible to “inter-agent trust” attacks, in which one AI system manipulates another by exploiting the trust agents extend to each other by default.
The emergence of “dark AI” tools such as FraudGPT, Evil-GPT, XXXGPT, and keanu-WormGPT has handed the cyber underground new powers. Distributed via Telegram channels and darknet markets, these clones offer unfettered models that can spoof emails, generate ransomware code, or imitate human writing styles to evade detection.
Fighting Fire with Fire
According to security experts, fighting AI with AI may be the only effective strategy against this new wave of AI-driven crime.
Crystal Morin, Senior Strategist at Sysdig and a former U.S. Air Force intelligence analyst, puts it plainly:
“Now, anyone with a GPU and a modicum of technical expertise can modify a model for malevolent purposes. Threat actors are evading protections in just this manner.”
In the two years since the first dark models went public, cloud exploits and ransomware attacks have surged, and the average cost of a data breach has climbed to all-time highs.
Tech-savvy criminals can now self-host open-source models and fine-tune them on enormous troves of leaked malware, phishing kits, and stolen credentials. The result is AI tooling that automates entire hacking workflows, from reconnaissance to payload delivery, with chilling efficiency.
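How low that barrier has fallen is easy to show. The sketch below, assuming the Hugging Face transformers library, loads an openly released checkpoint locally; “gpt2” is just a stand-in for any open model, and nothing in the loop enforces a provider’s guardrails.

```python
# Minimal sketch of self-hosting an open model with Hugging Face transformers.
# "gpt2" is a stand-in for any openly released checkpoint; once the weights
# run locally, no provider-enforced refusal layer sits between user and model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Once upon a time", max_new_tokens=25)
print(result[0]["generated_text"])
```

This is the accessibility Morin describes: a consumer GPU, a few lines of code, and full control over a model’s behavior.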

The Unrung Bell
According to experts, the unsettling reality is that there is no turning back. Generative AI has made creation, and therefore destruction, more accessible, and once models are released publicly they cannot be recalled or re-caged.
One analyst summed it up:
“We can’t unring the AI bell. What once required real skill and risk now requires only a prompt. The combination of open LLMs, leaked datasets, and readily available hardware has blurred the line between hackers and hobbyists.”
As regulators scramble to contain the fallout, the question is no longer whether AI will be misused, but how societies will adapt to a future in which every breakthrough arrives with its own exploit code.
About The Author
Suraj Koli is a content specialist in technical writing about cybersecurity and information security. He has written numerous articles on cybersecurity concepts, covering the latest trends in cyber awareness and ethical hacking.