OpenAI Announces External AI Safety Research Fellowship Opportunities

Advanced AI Safety Research Fellowship Seeks Applicants

The OpenAI Safety Fellowship is now accepting applications. This competitive external research program is open to outstanding researchers, engineers, and practitioners from diverse backgrounds who want to investigate critical safety and alignment challenges in cutting-edge AI systems.

About the Program

  • Duration: September 14, 2026, to February 5, 2027
  • Application Deadline: May 3, 2026
  • Notification Date: July 25, 2026

This initiative is designed for individuals from outside OpenAI, providing an unparalleled opportunity to collaborate with world-class experts in AI safety research. Researchers from various disciplines, including computer science, social science, cybersecurity, and human-computer interaction, are encouraged to apply.

According to OpenAI, preference will be given to those who can demonstrate empirically grounded and technically sound approaches to addressing pressing AI safety concerns.

Priorities and Benefits

  • Research Areas: Safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains
  • Fellowship Benefits: Workspace in Berkeley with the option of remote participation, expert mentorship from OpenAI staff, a monthly stipend, and access to computational resources

By the conclusion of the fellowship, each participant is expected to contribute meaningfully to the field, producing a substantial research output, such as a published paper, benchmark, or dataset.

Selection for the fellowship will prioritize candidates’ research ability, technical acumen, and capacity to execute. Academic credentials are not mandatory, but letters of reference are required as part of the application.


