Senior Staff Data / Software Engineer

Company: Wex
Location: Boston, MA, USA; San Francisco, CA, USA; Chicago, IL, USA; Portland, ME, USA
Salary: $158,000 – $210,000
Type: Full-Time
Degrees: Bachelor’s, Master’s, PhD
Experience Level: Expert or higher

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field, or demonstrably equivalent understanding, experience, and capability. A Master’s or PhD in Computer Science or a related field is a plus.
  • 10+ years of experience in large-scale software engineering.
  • Strong problem-solving skills, with excellent communication and collaboration abilities.
  • Highly self-motivated and eager to learn, consistently adopting new technologies to improve productivity and the quality of deliverables.
  • Extensive experience in architecture design, creating simple, high-quality, performant, and efficient solutions for large, complex problems.
  • Deep expertise in CI/CD automation.
  • Rich experience in combined engineering practices and Agile development, with a track record of leading teams to adopt these methods effectively.
  • Extensive experience and strong implementation skills in programming languages such as Java, C#, Golang, and Python, including coding, automated testing, measurement, and monitoring, with a track record of high productivity.
  • Expertise in data processing techniques, including data pipeline/platform development, SQL, and database management.
  • Extensive experience in data ingestion, cleaning, processing, enrichment, storage, and serving, applying techniques such as ELT, SQL, and relational algebra across a variety of databases.
  • Experience with cloud technologies, including AWS and Azure.
  • Strong understanding of data warehousing and dimensional modeling techniques.
  • Understanding of data governance principles.

Responsibilities

  • Collaborate with partners and stakeholders to understand customers’ business challenges and key requirements.
  • Design, test, code, and instrument complex data products, systems, platforms, and pipelines. Ensure high-quality, scalable, reliable, secure, cost-effective, and user-friendly solutions.
  • Utilize data to drive decisions by effectively measuring and analyzing outcomes.
  • Develop and maintain CI/CD automation using tools like GitHub Actions.
  • Implement Infrastructure as Code (IaC) using tools like Terraform, including provisioning and managing cloud-based data infrastructure.
  • Apply software development methodologies such as TDD and BDD, along with microservice and event-driven architectures, to ensure efficiency, reliability, quality, and scalability.
  • Support live data products, systems, and platforms through proactive monitoring, high data quality, rapid incident response, and continuous improvement.
  • Analyze data, systems, and processes independently to identify bottlenecks and opportunities for improvement. Lead complex problem diagnostics and drive timely resolutions.
  • Mentor peers and foster continuous learning of new technologies within the team and the broader organization, consistently upholding high technical standards.
  • Attract top industry talent; contribute to interviews and provide timely, high-quality feedback.
  • Serve as a role model by adhering to team processes and best practices, ensuring your solutions effectively solve customer and business problems in a reliable and sustainable way.
  • Collaborate with or lead peers in completing complex tasks, ensuring timely and effective execution.
  • Lead a Scrum team with hands-on involvement, ensuring high-quality and timely development and delivery aligned with agile best practices.
  • Own large, complex systems, platforms, and products, driving future developments and ensuring they deliver measurable business value.
  • Lead and actively participate in technical discussions, ensuring the team stays at the forefront of industry advancements.
  • Design and build high-performance, reliable systems with attention to detail and craftsmanship.
  • Complete large, complex tasks independently, seeking feedback from senior peers to maintain high quality.
  • Proactively identify and communicate project dependencies.
  • Review peer work, providing constructive feedback to promote continuous improvement.
  • Build scalable, secure, and high-quality big data platforms and tools to support data transfer, ingestion, processing, serving, delivery, and data governance needs.
  • Design and build efficient systems, platforms, pipelines, and tools for the entire data lifecycle, from ingestion, cleaning, processing, enrichment, and optimization through serving, leveraging the data platform. Develop systems that deliver high-quality, user-friendly data for internal and external use.
  • Develop data quality measurement and monitoring techniques, metadata management, data catalogs, and Master Data Management (MDM) systems.
  • Use data modeling techniques to design and implement efficient, easy-to-use data models and structures.
  • Become a deep subject matter expert in your functional area, applying best practices.
  • Apply creative problem-solving techniques to assess unique circumstances and suggest or implement solutions.
  • Leverage data and AI technologies to enhance productivity and solution quality, influencing peers to adopt these practices.
  • Lead team initiatives by applying your extensive experience and technical expertise to drive decisions on methods and approaches to complex issues.
  • Hold yourself and your team accountable for delivering high-quality results aligned with defined OKRs (Objectives and Key Results).
  • Provide strategic advice to senior leadership on highly complex situations, leading teams through initiatives that achieve excellent results.
  • Offer thought leadership on business initiatives by applying deep technical and industry expertise.

Preferred Qualifications

  • Proven expertise in designing and implementing scalable, reliable, and cost-effective data architectures, including data lakes, lakehouses, and data warehouses, to support analytics, real-time processing, and AI/ML applications.
  • Extensive experience building and optimizing high-throughput data ingestion frameworks for diverse data types (structured and unstructured) using tools like Kafka, Spark, AWS Glue, and Apache NiFi, with strong ETL/ELT proficiency.
  • Hands-on experience with AWS, Azure, or GCP managed services for data storage, compute, and orchestration, along with Infrastructure as Code (IaC) for scalable provisioning.
  • Expertise in efficient data modeling and schema design for analytical and transactional data, focusing on optimal data retrieval and storage practices.
  • Deep knowledge of event-driven and streaming architectures to enable real-time data processing and responsive data products.