Research Engineer/Scientist – Cyber – ML
Company | Anthropic |
---|---|
Location | San Francisco, CA, USA; New York, NY, USA |
Salary | $280,000 – $340,000 |
Type | Full-Time |
Degrees | |
Experience Level | Senior, Expert or higher |
Requirements
- Deep domain expertise in cybersecurity, particularly in offensive security, vulnerability research, or cyber defense
- Hands-on-keyboard offensive security experience, including penetration testing, red teaming, or exploit development
- Experience with cyber threat characterization, risk assessment, or cybersecurity policy development
- Experience in applying ML to security problems, such as malware analysis or intrusion detection
- Familiarity with ICS/SCADA systems and critical infrastructure security
- Possession of security certifications such as SANS, OSCP, or similar credentials
- Experience fine-tuning large language models for specialized domains
- Strong foundation in both security techniques and modern ML frameworks (PyTorch/TensorFlow)
- Experience translating complex technical findings into policy recommendations
- Ability to work with sensitive information while maintaining appropriate security protocols
- Experience with government cybersecurity programs or national security applications
- Ability to operate effectively in fast-paced environments while maintaining technical rigor
- Published research in relevant cybersecurity or AI security domains
Responsibilities
- Apply ML/AI research to build evaluation systems for cybersecurity safety, with focus on attack pattern characterization and threat detection
- Design and train specialized AI models for cyber threat classification, leveraging network data, exploit analysis, and attack protocols
- Work with SG research to design and implement state-of-the-art ML approaches for identifying dual-use cyber capabilities and zero-day exploits
- Build systems that target the transition of digital cyber capabilities into real-world attacks, preventing malicious exploitation
- Create and implement technical systems for monitoring emerging cyber threats and AI-enabled attack vectors
- Develop classifiers that can distinguish between legitimate security research and potential offensive cyber operations
- Build sophisticated evaluation infrastructure for measuring AI capability uplift in cybersecurity domains
- Design adversarial testing frameworks that probe model capabilities in cybersecurity contexts
- Integrate cyber range validation data with ML training pipelines to improve classifier accuracy
- Develop and maintain cyber threat datasets and benchmarks while ensuring appropriate information security
- Create tools that allow cybersecurity experts to quickly develop and deploy new threat detection evaluations
- Write production-quality Python code for high-throughput security data processing and evaluation systems
- Contribute to cyber risk assessments that directly inform AI model release decisions and policy development
- Work cross-functionally with cybersecurity policy experts, security researchers, and ML engineering teams
Preferred Qualifications
- Domain expertise in every specific risk area is not required
- Candidates are not expected to have 100% of the skills listed above
- Prior experience with AI model evaluation is not required