Senior Director AI Security
| Company | Protegrity |
|---|---|
| Location | Palo Alto, CA, USA; Stamford, CT, USA |
| Salary | $245,000 – $400,000 |
| Type | Full-Time |
| Degrees | |
| Experience Level | Expert or higher |
Requirements
- 12–15 years of experience in AI/ML, security, governance, and safety, with 2+ years specializing in GenAI and LLMs.
- Expertise in building and optimizing AI/LLM-based data classification systems.
- Expertise in using AI/LLM-based techniques for trust, risk, safety, and security guardrails of GenAI systems.
- Proven experience developing secure and privacy-preserving AI solutions, with hands-on knowledge of tools such as LangChain, Hugging Face, OpenAI APIs, etc.
- In-depth understanding of AI lifecycle management, including bias mitigation, explainability, and robustness testing.
- Advanced programming skills in Python, and ideally other languages such as Rust, Java, and Go.
- Practical knowledge of frameworks like TensorFlow, PyTorch, and Keras.
- Familiarity with MLOps pipelines and cloud-native tools (e.g., SageMaker, Vertex AI, Azure Machine Learning, Azure OpenAI Service, Bedrock, etc.).
- Knowledge of explainability tools (e.g., SHAP, LIME) and secure AI development frameworks.
- Understanding of data and AI regulatory and security frameworks and standards, including NIST, OWASP, the EU AI Act, etc.
- Demonstrated ability to translate technical expertise into high-value product outcomes.
Responsibilities
- Lead cutting-edge research on securing Generative AI, NLP, and LLMs to tackle challenges in unstructured data processing, with an emphasis on safety guardrails and compliance.
- Conduct advanced AI/ML security research and experimentation to develop novel algorithms and scalable solutions for secure and compliant GenAI systems.
- Develop and optimize models, tools, and methodologies to detect and mitigate risks, including bias, toxicity, and hallucination in LLMs.
- Architect and deploy robust frameworks to govern AI systems, ensuring compliance with industry standards and regulatory mandates.
- Build end-to-end pipelines for training, evaluation, and deployment of GenAI systems, with a focus on operational excellence.
- Incorporate state-of-the-art safety measures, including adversarial testing, bias mitigation, and secure access controls.
- Serve as an evangelist by publishing white papers, contributing to industry forums, and advising stakeholders on emerging GenAI trends.
- Lead collaborations with researchers, engineers, and product teams to integrate state-of-the-art AI models into enterprise-grade product solutions.
- Provide thought leadership in AI governance and security, mentoring cross-functional teams and stakeholders.
- Partner with legal, compliance, and ethics teams to ensure adherence to regulatory standards and ethical AI practices.
Preferred Qualifications
No preferred qualifications provided.