Senior Data Scientist
| Company | Rackner |
| --- | --- |
| Location | Washington, DC, USA |
| Salary | Not provided |
| Type | Full-Time |
| Degrees | Bachelor’s, Master’s |
| Experience Level | Senior |
Requirements
- Bachelor’s or Master’s degree in Statistics, Mathematics, Computer Science, Data Science, or a related quantitative field
- 7–8 years of professional experience in data science or analytics, with leadership exposure
- 2–3 years of hands-on experience with LLMs (e.g., fine-tuning, prompt engineering, instruction tuning)
- Ability to obtain a Public Trust Clearance (required)
- Authorization to work in the United States
Responsibilities
- Architect and develop AI/ML models for analyzing regulatory documents
- Collaborate with FDA subject matter experts to validate models and ensure relevance for regulatory decision-making
- Implement data preprocessing and feature engineering pipelines for unstructured data (a minimal sketch follows this list)
- Optimize model performance with a focus on accuracy, efficiency, and scalability
- Ensure compliance with FDA Good Machine Learning Practice (GMLP) principles and regulatory requirements
- Conduct predictive modeling, optimization, and continuous model monitoring
- Deliver client-facing presentations to executive stakeholders
- Identify new opportunities for innovation and strategic AI/ML initiatives
- Lead initiatives focused on LLM development, including fine-tuning, evaluation, and deployment strategies
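
As an illustration of the preprocessing work described above, here is a minimal sketch using scikit-learn. The cleaning steps, vectorizer settings, and sample document are illustrative assumptions, not requirements from the posting.

```python
# Minimal sketch of a preprocessing + feature engineering pipeline for
# unstructured text, using scikit-learn. The cleaning steps, vectorizer
# settings, and sample document below are illustrative assumptions.
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer


def clean_text(docs):
    """Lowercase and normalize whitespace across a batch of raw documents."""
    return [re.sub(r"\s+", " ", d).strip().lower() for d in docs]


pipeline = Pipeline([
    ("clean", FunctionTransformer(clean_text)),        # text normalization
    ("tfidf", TfidfVectorizer(max_features=5000,       # cap vocabulary size
                              ngram_range=(1, 2))),    # unigrams + bigrams
])

docs = ["  Section 510(k):  Premarket   Notification  "]  # placeholder input
features = pipeline.fit_transform(docs)  # sparse TF-IDF feature matrix
```

In a production setting, the same Pipeline object can be fitted once and reused for inference, which keeps the preprocessing applied at training and serving time identical.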
Preferred Qualifications
- Strong proficiency in Python (the preferred language), and experience with other languages such as C, R, Java, or Scala
- Expertise in statistical modeling, machine learning, NLP, and deep learning techniques
- Familiarity with AWS services: Athena, S3, Glue, SageMaker, Comprehend, Bedrock
- Exposure to MLOps practices, big data technologies (Hadoop, Spark), and cloud platforms
- Parameter-efficient fine-tuning (PEFT) methods such as LoRA/QLoRA (see the sketch after this list)
- Instruction fine-tuning, Retrieval-Augmented Generation (RAG), and Chain-of-Thought (CoT) or Tree-of-Thought (ToT) prompting
- Quantization, pruning, and knowledge distillation techniques
- Experience with Hugging Face Transformers, LangChain, LlamaIndex, or large-scale training frameworks
- Familiarity with LLM evaluation metrics, model interpretability, and optimization best practices
- Exceptional written and verbal communication skills
- Strong problem-solving abilities and passion for continuous learning
- Collaborative, team-oriented mindset with the ability to partner with diverse stakeholders
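
To ground the PEFT technique named above, here is a minimal sketch of LoRA fine-tuning with the Hugging Face `peft` library. The base model choice and hyperparameters are illustrative assumptions, not values prescribed by this posting.

```python
# Minimal sketch of LoRA-based parameter-efficient fine-tuning using the
# Hugging Face peft library. The base model and hyperparameters are
# illustrative assumptions, not values prescribed by this posting.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Llama-2-7b-hf"  # hypothetical base model choice
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the base weights and trains small low-rank adapter matrices
# injected into the chosen attention projections.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling applied to adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # Llama-style attention projections
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The wrapped model trains with the standard `transformers` Trainer; because only the adapter weights are updated, fine-tuning 7B-scale models becomes practical on modest hardware.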