Research Engineer / Scientist – Robustness & Safety Training
| Company | OpenAI |
| --- | --- |
| Location | San Francisco, CA, USA |
| Salary | $245,000 – $440,000 |
| Type | Full-Time |
| Degrees | PhD |
| Experience Level | Senior |
Requirements
- 4+ years of experience in AI safety, especially in areas such as RLHF, adversarial training, robustness, fairness, and bias.
- Hold a Ph.D. or another advanced degree in computer science, machine learning, or a related field.
- Have experience with safety work for deploying AI models.
- Have an in-depth understanding of deep learning research and/or strong engineering skills.
Responsibilities
- Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, and robustness.
- Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.
- Set research directions and strategies to make our AI systems safer, more aligned, and more robust.
- Coordinate and collaborate with cross-functional teams, including Trust & Safety (T&S), legal, policy, and other research teams, to ensure that our products meet the highest safety standards.
- Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.
Preferred Qualifications
- Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with the OpenAI Charter.
- Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.
- Are a team player who enjoys collaborative work environments.