Senior Staff AI Research Engineer – On-Device Language Intelligence
| Company | Samsung Research America |
| --- | --- |
| Location | Mountain View, CA, USA |
| Salary | $188,400 – $282,450 |
| Type | Full-Time |
| Degrees | PhD |
| Experience Level | Expert or higher |
Requirements
- PhD in Computer Science, Electrical Engineering, or a related field, or an equivalent combination of education, training, and experience
- 10+ years of research experience in AI/NLP/ML
- Experience conducting research and shipping user-facing products
- Experience with large language models (LLMs), including Transformer architectures, attention mechanisms, decoder-only LLMs, and SSM architectures
- Experience making LLM-based solutions deployable on-device with low latency and a small memory footprint (e.g., knowledge distillation), as well as on-device acceleration
- Experience with LLM alignment and instruction tuning (e.g., LoRA, adapters)
- Expertise in multi-step reasoning, planning, and reinforcement learning (including RLHF)
- Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or similar
- Strong analytical and problem-solving skills, with keen attention to detail
- Excellent written and verbal communication skills
- Demonstrated ability to work independently as well as collaboratively in a fast-paced research and development environment
- A strong track record of product/commercialization deliverables is required
Responsibilities
- Conduct cutting-edge research and development on large foundation models (LLM, VLM, and reasoning) for future products, including model design, efficient model training, instruction tuning, prompt engineering, planning, action, and related topics
- Collaborate with a multidisciplinary team of researchers, engineers, and domain experts to understand requirements, develop prototypes, and deliver robust solutions
- Conduct thorough evaluations and analysis of model performance, identify areas for improvement, and propose innovative solutions to enhance the overall quality and capabilities of large language models
- Generate creative solutions (patents) and publish research results at top conferences (papers)
Preferred Qualifications
- Foundational LLM training experience, including data curation, distributed training, and hyperparameter tuning
- NPU optimization