Staff Red Team Specialist – Safeguards
| Company | Anthropic |
| --- | --- |
| Location | San Francisco, CA, USA; New York, NY, USA |
| Salary | $275,000 – $355,000 |
| Type | Full-Time |
| Degrees | |
| Experience Level | Senior, Expert, or higher |
Requirements
- Demonstrated experience in penetration testing, red teaming, or application security
- Strong technical skills in web application security, including hands-on expertise with security testing tools (Burp Suite, Metasploit, custom scripting frameworks, etc.)
- A track record of discovering novel attack vectors and chaining vulnerabilities in creative ways
- Ability to build custom automation that extends standard security testing tools
- Strong written and verbal communication skills, with the ability to explain technical concepts to varied audiences
- Proven ability to think like an attacker
Responsibilities
- Conduct comprehensive adversarial testing across Anthropic’s product surfaces, developing creative attack scenarios that combine multiple exploitation techniques
- Research and implement novel testing approaches for emerging capabilities, including agent systems, tool use, and new interaction paradigms
- Design and execute ‘full kill chain’ attacks that emulate real-world threat actors attempting to achieve specific malicious objectives
- Build and maintain systematic testing methodologies that evaluate every aspect of our systems
- Develop automated testing frameworks to enable continuous assessment at scale
- Collaborate with Product, Engineering, and Policy teams to translate findings into concrete improvements
- Help establish metrics for measuring how effectively novel abuse is detected
Preferred Qualifications
- Experience with AI/ML security or adversarial machine learning
- Experience testing API security and rate limiting systems
- Background in testing business logic vulnerabilities and authorization bypass techniques
- Background in anti-fraud, trust & safety, or abuse prevention systems
- Familiarity with distributed systems and infrastructure security
- Understanding of AI safety considerations beyond traditional security