Cybercriminals Are Integrating Artificial Intelligence Into Their Operations, Underground Forum Study Finds
Criminals are increasingly incorporating artificial intelligence (AI) into their daily operations, according to a recent study that analyzed conversations on underground forums. The research, which spanned seven months and examined 163 discussion threads across 21 forums, found that AI tools are being used for a range of malicious activities, including drafting phishing emails, generating code snippets, and coaching social engineering calls.
The study identified four main themes in the discussions: repurposing mainstream AI services, marketing criminal AI products, adapting models for specific operations, and debating operational risk. Commercial chatbots such as ChatGPT and DeepSeek were a common starting point for many participants. Open-source and locally hosted models were also popular, with users running them offline to draft scripts, refine phishing language, and explore attack concepts.
The study also found that a number of AI-powered tools are being marketed specifically for fraud, spam, and malware. These tools, such as WormGPT and FraudGPT, function as wrappers that resell access to mainstream models through a bot interface or an API gateway paired with a jailbreak prompt. Sellers also advertised custom development services, including hosting large language models for clients lacking their own infrastructure.
“We could monitor forums, markets, and Telegram channels to assess what share of malicious products and services on sale claim to be powered or enabled by AI,” Dupont said. “This claim is often central to securing a competitive advantage, so sellers are unlikely to obfuscate this in their offerings.”
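As a rough illustration of the kind of measurement Dupont describes, the sketch below estimates what share of collected listings claim to be AI-powered. The keyword pattern, the `title`/`description` field names, and the `listings.json` input file are assumptions made for demonstration, not details from the study.

```python
# Illustrative sketch only: estimating what share of marketplace listings
# claim to be AI-powered. The keyword pattern, field names, and input file
# are assumptions for demonstration, not details from the study.
import json
import re

# Hypothetical keyword pattern; a real measurement would need a validated codebook.
AI_CLAIM_PATTERN = re.compile(
    r"\b(ai[- ]powered|gpt|llm|machine learning|neural)\b", re.IGNORECASE
)

def share_of_ai_claims(listings: list[dict]) -> float:
    """Return the fraction of listings whose title or description mentions an AI claim."""
    if not listings:
        return 0.0
    flagged = sum(
        1 for item in listings
        if AI_CLAIM_PATTERN.search(
            f"{item.get('title', '')} {item.get('description', '')}"
        )
    )
    return flagged / len(listings)

if __name__ == "__main__":
    # "listings.json" is a placeholder for data gathered from monitored
    # forums, markets, or Telegram channels.
    with open("listings.json", encoding="utf-8") as fh:
        print(f"Share claiming AI: {share_of_ai_claims(json.load(fh)):.1%}")
```

Tracking that share over time would give researchers a simple proxy for how central AI claims have become to sellers' marketing, in line with the competitive-advantage argument above.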
The study found that higher-skill discussions focused on adapting AI to specific workflows, such as using chatbots to rehearse social engineering scripts tailored to a target organization. Call center automation was also a prominent theme, with posts detailing virtual assistants that support human operators in real time.
“Social engineering and scamming operations will probably be able to leverage AI capacities more systematically, profitably, and sooner than malware writing operations in the near future at least,” he said.
However, skepticism about the reliability of AI-generated code for complex offensive tasks was a common theme in the discussions. Participants also expressed concerns about operational security, including the risk of logging, hidden backdoors, and potential interception of stolen data.
“Any fraud signal that scales up and demonstrates high levels of coordination should be examined carefully to determine whether AI tools are at play,” he said.
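One way defenders might operationalize that advice is to flag message batches that combine high volume with heavily templated wording. The sketch below is a minimal example of that idea, using a token-level Jaccard similarity; the function names, thresholds, and heuristic are illustrative assumptions, not the researchers' method.

```python
# Illustrative sketch only: flagging message batches that combine scale with
# heavy templating, one plausible reading of the "scale plus coordination"
# signal described above. Thresholds and the Jaccard heuristic are assumptions.
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two messages."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def looks_coordinated(messages: list[str],
                      min_volume: int = 50,
                      similarity: float = 0.8,
                      coordinated_share: float = 0.5) -> bool:
    """Return True when a batch is both large and dominated by near-duplicate
    variants, a pattern worth a closer look for possible AI assistance."""
    if len(messages) < min_volume:
        return False
    pairs = list(combinations(messages, 2))
    near_dupes = sum(1 for a, b in pairs if jaccard(a, b) >= similarity)
    return near_dupes / len(pairs) >= coordinated_share

if __name__ == "__main__":
    # Synthetic example: 60 lightly varied copies of the same lure text.
    batch = [f"Your parcel {i} is held at customs, pay the release fee now"
             for i in range(60)]
    print(looks_coordinated(batch))  # True: large batch, near-identical wording
```

A flag from a heuristic like this would not prove AI involvement; it simply marks campaigns whose scale and uniformity merit the closer examination the researchers recommend.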
