HackerOne Clarifies AI Training Stance Amid Researcher Concerns Over Data Security and Ethics
Security researchers recently raised concerns that their bug bounty submissions might be used to train artificial intelligence models at HackerOne, a prominent bug bounty platform. The concerns arose following the launch of the company’s Agentic Penetration Testing as a Service (PTaaS) offering, which combines autonomous AI agents with human expertise.
HackerOne’s Response
In response to these concerns, HackerOne CEO Kara Sprague clarified that the company does not use researcher submissions or confidential customer data to train its AI agents, either internally or through third-party services. Sprague emphasized that the company’s AI system, Hai, is designed to augment human researchers by accelerating outcomes such as validated reports and rewards, rather than replacing them.
Industry Response
The controversy underscores the growing importance of transparency around how AI models are built and used in the cybersecurity industry. Other bug bounty platforms, including Intigriti and Bugcrowd, have also reaffirmed their policies against using researcher data for AI model training. These companies likewise emphasized that researchers remain accountable for their own use of AI tools and that any automated output must still meet submission standards.
Broader Implications
HackerOne’s clarification comes as AI and machine learning become increasingly prevalent across cybersecurity. As these technologies continue to evolve, concerns around data usage and transparency are likely to remain a central issue.
The incident serves as a reminder that clear policies and guidelines around AI use are needed in the industry. By prioritizing transparency and accountability, companies can build trust with their researcher communities and support the responsible development of AI technologies.
