Major Data Breach at Jerry’s Store Exposes 345,000 Credit Cards


Cybersecurity Researchers Uncover Significant Vulnerability in AI-Generated Code

Cybersecurity researchers have discovered a critical weakness in artificial intelligence (AI)-generated code that led to the exposure of more than 345,000 stolen credit card records on the internet.

Fake Carding Platform Compromised by Security Flaw

The compromised system, known as “Jerry’s Store,” was a fake carding platform used for trading and testing stolen payment cards. An investigation conducted by Cybernews revealed that the leak was caused by a security flaw in the AI-generated code, which allowed access to sensitive information, including cardholder names, numbers, expiration dates, and billing addresses.

Leaked Database Contains Valid and Invalid Credit Cards

The platform, which operated as an underground marketplace for stolen payment cards, was built using an AI coding assistant called “Cursor.” The AI-generated setup created a web directory structure without access controls or authentication, leaving the database open to anyone who could locate the server. The leaked database contained nearly 200,000 cards already marked as “invalid,” while approximately 145,000 cards were still active and usable.
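The flaw described above comes down to serving sensitive records with no authentication check at all. As a purely illustrative sketch (the `API_TOKEN` value and the `is_authorized` helper are hypothetical, not taken from the leaked code), even a minimal bearer-token gate would have refused anonymous requests like the ones that exposed the database:

```python
import hmac

# Hypothetical shared secret; a real deployment would load this from a
# secrets manager, never hard-code it.
API_TOKEN = "s3cr3t-example-token"

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token.

    hmac.compare_digest performs a constant-time comparison, avoiding
    timing side channels when checking secrets.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return hmac.compare_digest(auth[len("Bearer "):], API_TOKEN)

# An unauthenticated request is refused; a correctly authenticated one passes.
print(is_authorized({}))                                                # False
print(is_authorized({"Authorization": "Bearer s3cr3t-example-token"}))  # True
```

The point is not this particular scheme but the presence of *any* gate before the data: the AI-generated directory structure performed no such check.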

Potential Monetary Value of Exposed Data

Experts estimate that valid stolen credit cards are commonly sold on dark web marketplaces for between $7 and $18 each. Based on this estimate, the exposed database could be worth millions of dollars in illicit underground markets. The stolen financial data can be used for various malicious activities, including online purchases, identity theft, and unauthorized financial transactions.
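The “millions of dollars” figure follows directly from the numbers above; a quick back-of-the-envelope check, using the article’s 145,000 active cards and the $7–$18 per-card price range:

```python
valid_cards = 145_000          # cards still active, per the report
price_low, price_high = 7, 18  # typical dark-web price per valid card, USD

low_estimate = valid_cards * price_low
high_estimate = valid_cards * price_high
print(f"${low_estimate:,} to ${high_estimate:,}")  # $1,015,000 to $2,610,000
```

Even at the low end of the range, the valid cards alone exceed a million dollars in underground-market value.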

How Cybercriminals Verify Stolen Cards

Researchers also uncovered how cybercriminals verify whether stolen cards remain active before selling them. They allegedly used legitimate e-commerce and online service platforms, including Amazon, Grubhub, Temu, Lyft, and Sam’s Club, to run small test transactions. If a payment succeeded, the card was marked as “valid” and later sold at higher prices on dark web networks.

Former IPS officer Professor Triveni Singh stated, “AI tools have significantly lowered the barrier for cybercrime, making it easier for individuals to deploy sophisticated fraud infrastructure.”

Importance of Implementing Strict Security Measures

Security experts warn that AI-powered automation is becoming a new tool for cybercriminals, allowing even individuals with limited technical knowledge to build sophisticated fraud platforms with AI assistance. Technology experts emphasize the importance of strict security audits, penetration testing, and manual verification before deploying AI-generated systems online. They advise consumers to regularly monitor their bank accounts and credit card statements, enable SMS and transaction alerts, and immediately block cards if suspicious activity is detected.

Companies Must Prioritize Security Measures

Companies and software developers must prioritize security measures to protect against the risks associated with AI-generated code. This includes regular security updates, patches, and monitoring for potential vulnerabilities. By taking proactive steps, organizations can minimize the risk of falling victim to AI-powered cyber attacks.
