Software Engineer – Machine Learning Infrastructure
| Company | DoorDash |
|---|---|
| Location | Seattle, WA, USA; San Francisco, CA, USA; Sunnyvale, CA, USA |
| Salary | $130,600 – $285,000 |
| Type | Full-Time |
| Degrees | Bachelor’s, Master’s, PhD |
| Experience Level | Senior |
Requirements
- B.S., M.S., or PhD in Computer Science or an equivalent field
- Exceptionally strong grasp of CS fundamentals and object-oriented programming (OOP) languages
- 6+ years of industry experience in software engineering
- Prior experience building production machine learning systems, such as enabling data analytics at scale
- Prior experience in machine learning – you’ve developed and deployed your own models, even if they were simple proofs of concept
- Systems engineering – you’ve built meaningful pieces of infrastructure in a cloud computing environment. Bonus if those were data processing systems or distributed systems
Responsibilities
- Build a world-class ML platform where models are developed, trained, and deployed seamlessly
- Work closely with Data Scientists and Product Engineers to evolve the ML platform as per their use cases
- Help build high-performance, flexible pipelines that can rapidly evolve to handle new technologies, techniques, and modeling approaches
- Work on infrastructure designs and solutions to store trillions of feature values and power hundreds of billions of predictions a day
- Help design and drive directions for the centralized machine learning platform that powers all of DoorDash’s business
- Improve the reliability, scalability, and observability of our training and inference infrastructure
Preferred Qualifications
- Experience with challenges in real-time computing
- Experience with large scale distributed systems, data processing pipelines and machine learning training and serving infrastructure
- Familiar with pandas, Python machine learning libraries, and deep learning frameworks such as PyTorch and TensorFlow
- Familiar with Spark, MLlib, Databricks, MLflow, Apache Airflow, Dagster, and related technologies
- Familiar with large language models such as GPT, LLaMA, and BERT, and other Transformer-based architectures
- Familiar with cloud-based environments such as AWS