Microsoft Warns: AI is Breeding a New Race of Invisible Cybercriminals

Microsoft says it blocked 1.6 million bot signup attempts per hour, rejected more than 49,000 fraudulent partnership enrollments, and thwarted over $4 billion in fraud attempts between April 2024 and April 2025. The figures, from Microsoft’s Cyber Signals report, show that while defenses are getting stronger, attackers are evolving too, and artificial intelligence is now their most potent tool.
According to Microsoft’s threat intelligence, criminals are using AI to stage bogus job interviews, impersonate tech support staff, and spin up fraudulent websites, pairing AI-generated language with social engineering to evade detection. Automated phishing emails, fake support portals, and AI-cloned voices are becoming increasingly difficult for victims to recognize.
Fake E-Commerce Stores and Job Offers in Minutes
With AI-generated product descriptions, stock photos, customer reviews, and even interactive chatbots, fraudsters can now build convincing fake storefronts in a matter of minutes. These criminal e-commerce operations use scripted responses from AI-powered customer service bots to trick users into paying for nonexistent products or services, and then to stall refund requests.
Job scams have also grown considerably more sophisticated. Fraudsters use generative AI to write authentic-looking job postings, conduct simulated interviews, and send automated offer emails, all aimed at harvesting personal information and financial credentials. Microsoft reports a rise in these scams, which specifically target job applicants with little to no experience.
Tech Support Impersonation and Microsoft’s Countermeasures
Tech support fraud is another growing problem: threat actors pose as IT support agents and use remote-assistance tools such as Microsoft Quick Assist to gain access to victims’ devices. Microsoft has observed groups such as Storm-1811 using AI-powered voice phishing (vishing) tactics to trick users into granting that access.
To counter this, Microsoft has added warning prompts that alert users to possible scams and now blocks some 4,415 suspicious Quick Assist connection attempts every day. Features such as real-time threat monitoring and digital fingerprinting are also being rolled out to keep scammers from slipping past these defenses.
As part of its Secure Future Initiative, Microsoft now requires all internal product teams to build fraud detection and prevention into their designs. It has also joined the Global Anti-Scam Alliance to cooperate with governments and law enforcement agencies worldwide.
About The Author:
Yogesh Naager is a content marketer who specializes in the cybersecurity and B2B space. Besides writing for the News4Hackers blogs, he also writes for brands including Craw Security, Bytecode Security, and NASSCOM.