AI Engineer – LLM-Based Content Moderation
| Company | Trust Lab |
|---|---|
| Location | Palo Alto, CA, USA |
| Salary | $150,000 – $200,000 |
| Type | Full-Time |
| Degrees | Bachelor’s, Master’s |
| Experience Level | Mid Level, Senior |
## Requirements
- Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Experience in AI/ML, with a focus on NLP, deep learning, and LLMs.
- Proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, or JAX.
- Experience in fine-tuning and deploying transformer-based models like GPT, BERT, T5, or similar.
- Familiarity with evaluation metrics for classification tasks (e.g., F1-score, precision-recall curves) and best practices for handling imbalanced datasets.
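As one illustration of the metrics named above, the sketch below computes F1 and a precision-recall summary for a small, made-up imbalanced binary dataset using scikit-learn (labels, scores, and the 0.5 threshold are all assumptions for the example, not part of the role description):

```python
# Illustrative sketch: F1 and precision-recall metrics on an imbalanced
# binary classification task (e.g., abusive vs. benign content).
# The labels and scores below are fabricated for demonstration.
from sklearn.metrics import f1_score, precision_recall_curve, average_precision_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # imbalanced: only 20% positive
y_score = [0.1, 0.2, 0.15, 0.3, 0.05, 0.55, 0.25, 0.35, 0.8, 0.45]

# Hard predictions at an assumed 0.5 threshold give a single F1 number.
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
print(f1_score(y_true, y_pred))  # harmonic mean of precision and recall

# The precision-recall curve sweeps all thresholds, which is more
# informative than accuracy when the positive class is rare;
# average precision summarizes the curve as one scalar.
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
print(average_precision_score(y_true, y_score))
```

In practice, threshold choice on the precision-recall curve is where moderation trade-offs (over-removal vs. missed abuse) get made, which is why these metrics matter more here than plain accuracy.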
## Responsibilities
- Design, develop, and optimize AI models for content moderation, focusing on precision and recall improvements.
- Fine-tune LLMs for classification tasks related to abuse detection, leveraging supervised and reinforcement learning techniques.
- Optimize model performance through advanced techniques such as active learning, self-supervision, and domain adaptation.
- Deploy and monitor content moderation models in production, iterating based on real-world performance metrics and feedback loops.
## Preferred Qualifications
- Experience working with large-scale, real-world content moderation datasets.
- Knowledge of regulatory frameworks related to content moderation (e.g., GDPR, DSA, Section 230).
- Familiarity with knowledge distillation and model compression techniques for efficient deployment.
- Experience with reinforcement learning (e.g., RLHF) for AI safety applications.