Research Scientist in LLM Foundation Models – Reasoning – Planning & Agent
Company | ByteDance |
---|---|
Location | Seattle, WA, USA |
Salary | Not Provided |
Type | Full-Time |
Degrees | |
Experience Level | Senior, Expert or higher |
Requirements
- Research experience with RL, LLMs, and CV, and familiarity with large-scale model training
- Proficiency in data structures and fundamental algorithms; fluency in C/C++ or Python
- Excellent problem analysis and problem-solving skills, with the ability to dig deep into issues in large-scale model training and application
- Good communication and collaboration skills; able to explore new technologies with the team and drive technological progress
Responsibilities
- Enhance reasoning and planning throughout the entire development process, encompassing data acquisition, model evaluation, pretraining, SFT, reward modeling, and reinforcement learning
- Synthesize large-scale, high-quality (multi-modal) data through methods such as rewriting, augmentation, and generation
- Solve complex tasks via System 2 thinking, leveraging advanced decoding strategies such as MCTS and A*
- Investigate and implement robust evaluation methodologies to assess model performance at various stages
- Teach foundation models to use tools, interact with APIs and code interpreters, and build agents and multi-agents to solve complex tasks
Preferred Qualifications
- Experience with influential projects or papers in RL, NLP, or deep learning
- Prize winner in competitions such as ACM/ICPC, IOI, TopCoder, etc.