Software Engineer – Serverless Compute Infrastructure

Company: ByteDance
Location: San Jose, CA, USA
Salary: Not Provided
Type: Full-Time
Degrees: Bachelor’s, Master’s, PhD
Experience Level: Junior, Mid Level

Requirements

  • B.S./M.S. degree in Computer Science, Computer Engineering, or a related area with 2+ years of relevant industry experience; new graduates with a Ph.D. degree and a strong publication record may be considered as an exception.
  • Solid understanding of at least one of the following areas: Unix/Linux environments, distributed and parallel systems, high-performance networking systems, or large-scale software system development.
  • Proven experience designing, architecting, and building cloud and ML infrastructure, including but not limited to resource management, allocation, job scheduling, and monitoring.
  • Familiarity with container and orchestration technologies such as Docker and Kubernetes.
  • Proficiency in at least one major programming language such as Python, Go, C++, Rust, or Java.

Responsibilities

  • Build a cutting-edge application orchestration framework to host various types of production workloads, e.g., service management, big data jobs, distributed machine learning systems, distributed storage services, edge computing, and public cloud.
  • Build complex container-based cluster management to manage our hyper-scale resources and workloads with extremely high performance, scalability, and resilience.
  • Design and build a flexible, unified, and distributed resource/task scheduling framework to meet various new application requirements.
  • Design and build cluster federation, scaling, and co-location solutions to optimize resource utilization in multi-cloud environments
  • Design, architect, and implement next-gen cloud-native infrastructure to enable cost-efficient, easy-to-use, and secure ML platforms for the latest ML workloads, including but not limited to LLM training and inference, reducing time-to-market (TTM) for ByteDance customers.
  • Write high-quality, production-level code that is easy to maintain and test.
  • Keep up with the latest state of the art in the open-source and research communities in AI/ML, LLMs, and systems, and implement and extend best practices.

Preferred Qualifications

  • Experience with large-scale cluster management systems, e.g., Kubernetes, Ray, YARN, or Mesos.
  • Experience in large-scale resource and task scheduling development.
  • Project experience in application scaling, workload co-location, and isolation enhancement
  • Experience with container runtimes and relevant projects, e.g., containerd, Kata Containers, gVisor, or X-Containers.
  • Experience working on GPUs/CUDA and/or LLM engines (e.g., vLLM, TensorRT-LLM).
  • Experience with a public cloud provider (AWS, Azure, or GCP) and their ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
  • Proficiency in ML systems (e.g., Ray, DeepSpeed), and deep learning frameworks (e.g., PyTorch).
  • Great communication skills and the ability to work well within a team and across engineering teams.
  • Passionate about system efficiency, quality, performance and scalability.