OpenAI Introduces Bug Bounty Program for Safety and Security Vulnerabilities
OpenAI Launches Public Safety Bug Bounty Program for AI-Specific Abuse and Safety Risks
In a significant development, OpenAI has introduced a dedicated bug bounty program aimed at identifying and mitigating AI-specific abuse and safety risks in its products.
Program Overview
- The program covers a range of AI-related safety scenarios, including third-party prompt injection attacks, data exfiltration exploits, and disallowed actions performed by OpenAI’s agentic products on the company’s website at scale.
- Additionally, it includes submissions related to issues leading to the exposure of OpenAI’s proprietary information and weaknesses in account and platform integrity.
According to OpenAI, submissions will be reviewed by the Safety and Security Bug Bounty teams and may be redirected between the two programs based on the scope and ownership of the issue. If researchers identify flaws facilitating direct paths to user harm and provide actionable remediation steps, they may be eligible for rewards on a case-by-case basis.
Rewards
- Researchers can earn up to $7,500 for reports detailing consistently reproducible issues of high severity, which include a clear set of actionable recommendations for mitigation.
Targeted Areas
- The program targets abuse risks in agentic OpenAI products, such as Atlas Browser, Codex, Operator, Connectors, and other ChatGPT tools.
- Vulnerabilities in connectors and MCP integrators that can be exploited to cause material harm are also within the scope of the program.
By participating in this program, researchers can help ensure the safe and secure use of OpenAI’s products and contribute to the company’s ongoing efforts to protect users and prevent potential abuses.
