Forward Deployed Data Engineer – TS/SCI

Company: TRM Labs
Location: Washington, DC, USA
Salary: $200,000 – $260,000
Type: Full-Time
Degrees: Bachelor’s
Experience Level: Mid Level, Senior

Requirements

  • Bachelor’s degree (or equivalent) in Computer Science, Engineering, or a related field.
  • 4+ years of hands-on experience building and deploying data pipelines in Python.
  • Proven expertise with Apache Airflow (DAG development, scheduler tuning, custom operators).
  • Strong knowledge of Apache Spark (Spark SQL, DataFrames, performance tuning).
  • Deep SQL skills—able to optimize queries with window functions, CTEs, and large datasets.
  • Professional experience deploying cloud-native architectures on AWS, including services like S3, EMR, EKS, IAM, and Redshift.
  • Familiarity with secure cloud environments and experience implementing FedRAMP/FISMA controls.
  • Experience deploying applications and data workflows on Kubernetes, preferably EKS.
  • Infrastructure-as-Code proficiency with Terraform or CloudFormation.
  • Skilled in GitOps and CI/CD practices using Jenkins, GitLab CI, or similar tools.
  • Excellent verbal and written communication skills—able to interface confidently with both technical and non-technical stakeholders.
  • Willingness and ability to travel up to 25% to client sites as needed.
  • Active TS/SCI clearance required (Polygraph strongly preferred).

Responsibilities

  • Partner directly with mission-focused customers to design and deploy secure, scalable cloud-based data lakehouse solutions on AWS (e.g., S3, EMR/EKS, Iceberg or Delta Lake).
  • Own and deliver production-ready ETL/ELT pipelines using Python, Apache Airflow, Spark, and SQL—optimized for petabyte-scale workloads.
  • Containerize and deploy services on Kubernetes (EKS), using Terraform or CloudFormation for Infrastructure-as-Code and repeatable environments.
  • Design integrations that ingest data from message buses, APIs, and relational databases, embedding real-time analytics capabilities into client workflows.
  • Actively participate in all phases of the software development lifecycle: requirements gathering, architecture, implementation, testing, and secure deployment.
  • Implement observability solutions (e.g., Prometheus, Datadog, New Relic) to uphold SLAs and drive continuous improvement.
  • Support mission-critical systems in production environments—resolving incidents alongside customer operations teams.

Preferred Qualifications

    No preferred qualifications provided.