GitHub Experiences 81% Rise in AI-Service Leaks as 29M Secrets Exposed to Public
Artificial Intelligence in Software Development Leads to Surge in Sensitive Data Exposure
The increasing adoption of artificial intelligence in software development has led to a significant surge in the exposure of sensitive data, with an 81% rise in AI-service credential leaks detected in 2025.
Report Highlights Risks Associated with AI-Assisted Coding Tools
According to a recent report by the security firm GitGuardian, this trend has contributed to a record 29 million secrets being leaked on GitHub, the world’s largest code-hosting platform.
The report, titled “State of Secrets Sprawl,” highlights the growing risks associated with the use of AI in software development, particularly in the context of non-human identities (NHIs) and their secrets. NHIs refer to automated systems, such as service accounts and applications, that interact with software systems.
Key Findings of the Report
The study found that the use of AI-assisted coding tools, such as Claude Code, has increased the speed of software creation, but also amplified the risk of secret leaks. Specifically, the report notes that AI-assisted commits leaked secrets at a rate of 3.2%, which is twice the baseline rate.
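Leaks of this kind typically happen when a generated change embeds a live credential directly in source. As a rough illustration of the kind of pre-commit check that can catch such a commit, the sketch below scans added diff lines against a few illustrative credential patterns. The patterns and function names are this article's assumptions, not GitGuardian's actual detection rules:

```python
import re
import subprocess

# Illustrative patterns for common credential formats (assumptions,
# not a real scanner's rule set).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # generic "sk-" style API key
]

def staged_diff() -> str:
    """Return the diff of staged changes with zero context lines."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def find_secrets(diff: str) -> list[str]:
    """Return added lines that match any secret pattern."""
    hits = []
    for line in diff.splitlines():
        # Only inspect added lines, skipping the '+++' file header.
        if line.startswith("+") and not line.startswith("+++"):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(line[1:].strip())
    return hits
```

Wired into a pre-commit hook, a non-empty result from `find_secrets(staged_diff())` would block the commit; production scanners layer entropy checks and hundreds of provider-specific detectors on top of this basic idea.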
Another significant finding is the rapid growth of AI-service credential leaks, which increased by 81% year-over-year, reaching 1,275,105 incidents. This trend is particularly concerning, as AI services are becoming increasingly integrated into software development workflows.
Risks Associated with Internal Repositories and Collaboration Tools
The report also highlights the risks associated with internal repositories, which are six times more likely to contain hardcoded secrets than public repositories. Additionally, the study found that 28% of incidents originate from leaks in collaboration and productivity tools, such as Slack and Trello, where credentials can be exposed to broader audiences.
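Hardcoded secrets in internal repositories are often caught not by fixed patterns but by entropy heuristics: long, high-randomness tokens stand out from ordinary identifiers. A minimal sketch of that idea, with an assumed length floor and entropy threshold rather than any vendor's tuned values:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, from the string's character counts."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(token: str, min_len: int = 20,
                      threshold: float = 4.0) -> bool:
    """Flag long, high-entropy tokens; min_len and threshold are assumed knobs."""
    return len(token) >= min_len and shannon_entropy(token) >= threshold
```

A random 24-character API key scores near log2(24) ≈ 4.6 bits and trips the check, while a readable name like `the_database_password` stays well below the threshold; real scanners combine this heuristic with pattern matching to cut false positives.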
GitGuardian’s CEO, Eric Fourrier, emphasized the need for security teams to prioritize the protection of developer machines, which are becoming an increasingly critical part of the credential perimeter. “AI agents need local credentials to connect across systems, turning developer laptops into a massive attack surface,” he said.
Recommendations for Security Teams
The report also highlights the need for better governance and remediation strategies to address the growing problem of secret leaks. Specifically, the study found that 60% of policy violations are credentials that persist over time, and 46% of critical secrets have no vendor-provided validation mechanism.
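Where a vendor does provide a validation mechanism, triage can check whether a leaked credential is still live before prioritizing remediation. The sketch below does this for a GitHub token via the public `GET /user` endpoint, which returns 200 for a valid token and 401 for a revoked one; the function names and verdict strings are illustrative assumptions:

```python
import urllib.error
import urllib.request

def github_token_status(token: str) -> int:
    """Return the HTTP status GitHub's /user endpoint gives this token."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}",
                 "User-Agent": "secret-triage-sketch"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def verdict(status: int) -> str:
    """Map a status code to a triage verdict (wording is illustrative)."""
    if status == 200:
        return "live: revoke immediately"
    if status == 401:
        return "invalid or already revoked"
    return "inconclusive: check manually"
```

For the 46% of critical secrets without such an endpoint, teams fall back on manual rotation, which is one reason the report pushes for governance rather than detection alone.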
The report concludes that the industry is accumulating a growing security debt and needs to prioritize NHI governance, not just detection. GitGuardian recommends that security teams treat non-human identities as first-class assets, with dedicated governance, context, and remediation automation across code and non-code surfaces.
Urgent Need for Security Teams to Adapt
Overall, the report highlights the urgent need for security teams to adapt to the changing landscape of software development and prioritize the protection of sensitive data in the face of increasing AI adoption.
