Senior Software Engineer – Platform Engineering

Company: Wex
Location: Boston, MA, USA; San Francisco, CA, USA; Chicago, IL, USA; Portland, ME, USA
Salary: $134,000 – $178,000
Type: Full-Time
Degrees: Bachelor’s, Master’s, PhD
Experience Level: Senior, Expert or higher

Requirements

  • Bachelor’s degree in Computer Science, Software Engineering, or a related field, OR equivalent deep understanding, experience, and capability.
  • A Master’s or PhD in Computer Science (or a related field) with 5+ years of software engineering experience, or 7+ years of large-scale software engineering experience, including expertise in data system and platform development.
  • Strong problem-solving skills, with excellent communication and collaboration abilities.
  • Highly self-motivated and eager to learn, continuously adopting new technologies to improve productivity and the quality of deliverables.
  • Extensive experience designing simple, high-quality, performant, and efficient solutions for large, complex problems.
  • Strong understanding and hands-on experience with CI/CD automation.
  • Proven experience in combined engineering practices and Agile development.
  • Extensive experience and strong implementation skills in languages such as Java, C#, Go, and Python, including coding, automated testing, performance measurement, and monitoring.
  • In-depth understanding of data processing techniques, such as data pipeline and platform development, SQL, and databases.
  • Extensive experience in data ingestion, cleaning, processing, enrichment, storage, serving, and quality assurance techniques and tools, including ELT, SQL, relational algebra, and databases.
  • Experience with cloud technologies, particularly AWS and Azure.
  • Good understanding of data warehousing, dimensional modeling, and related techniques.

Responsibilities

  • Collaborate with partners and stakeholders to understand customers’ business needs and key challenges.
  • Design, test, code, and instrument new data products, systems, platforms, and pipelines of high complexity, ensuring simple and high-quality solutions.
  • Leverage data effectively to measure, analyze, and drive decisions.
  • Develop and maintain CI/CD automation using tools such as GitHub Actions.
  • Implement Infrastructure as Code (IaC) using tools like Terraform, managing cloud-based data infrastructure.
  • Perform software development using TDD, BDD, microservices, and event-driven architectures, with a focus on efficiency, reliability, quality, and scalability.
  • Support live data products, systems, and platforms, ensuring proactive monitoring, high data quality, rapid incident response, and continuous improvement.
  • Analyze data, systems, and processes independently to identify bottlenecks and opportunities for improvement.
  • Mentor peers and foster continuous learning of new technologies within the team and organization.
  • Attract top talent to the team; participate in interviews and provide timely, constructive feedback.
  • Act as a role model for team processes and best practices, ensuring assigned tasks solve customer and business problems effectively, reliably, and sustainably.
  • Collaborate with and lead peers in completing complex tasks and problem-solving efforts.
  • Lead Scrum teams with hands-on involvement in Agile practices, ensuring the timely and high-quality development of solutions.
  • Own large, complex components, systems, products, and platforms.
  • Lead and participate in technical discussions, driving high-quality and efficient system design.
  • Independently complete medium- to large-complexity tasks, proactively seeking feedback from senior engineers to ensure quality.
  • Proactively identify and communicate project dependencies.
  • Review peer work and provide constructive feedback to enhance team collaboration and quality.
  • Build reliable, secure, high-quality, and scalable big data platforms and tools to support data transfer, ingestion, processing, serving, delivery, consumption, and data governance.
  • Design and implement scalable systems, platforms, pipelines, and tools for the end-to-end data lifecycle, including ingestion, cleaning, processing, enrichment, optimization, and serving, ensuring high-quality and easy-to-use data for both internal and external purposes.
  • Develop data quality measurement and monitoring systems, metadata management, data catalogs, and Master Data Management (MDM) solutions.
  • Use data modeling techniques to design and implement efficient and user-friendly data models and structures.
  • Become a subject matter expert in your functional area and best practices.
  • Apply creative problem-solving techniques to resolve issues or provide various approaches for unique situations.
  • Leverage data and AI technologies in design and development for high productivity and improved solution quality, influencing peers in these areas.
  • Lead team initiatives, applying your broad experience and technical knowledge to make informed decisions on solving complex issues.
  • Hold yourself and your team accountable for delivering high-quality results using defined OKRs.
  • Interact with senior managers to discuss plans and results and to provide advice on complex matters.

Preferred Qualifications

  • Lakehouse Platform Development: Experience in building and maintaining Lakehouse platforms and resources, ideally using Snowflake or equivalent data technologies.
  • Cloud Infrastructure Management: Extensive hands-on experience with cloud infrastructures in AWS and Azure, with a strong focus on managing infrastructure as code (IaC) using Terraform and implementing CI/CD pipelines using GitHub Actions.
  • API Expertise: Skilled in building, managing, and consuming APIs, with a strong understanding of API technologies and integration best practices.