Software Engineer III – Spark/Python/AWS

Company: JPMorgan Chase
Location: Wilmington, DE, USA
Salary: Not Provided
Type: Full-Time
Degrees: Bachelor’s
Experience Level: Mid Level, Senior

Requirements

  • Formal training or certification on software engineering concepts and 3+ years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Demonstrated experience as a Data Engineer or in a similar role, with a focus on big data technologies
  • Experience designing, developing, and optimizing data pipelines using Spark and Python to process large volumes of data efficiently
  • Experience with AWS services (such as S3, EC2, Lambda, Athena, Kafka, EKS, and SQS) to build and manage cloud-based data infrastructure
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
  • Overall knowledge of the Software Development Life Cycle
  • Solid understanding of agile methodologies and of CI/CD, application resiliency, and security practices
  • Strong problem-solving abilities and keen attention to detail

Responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Implements ETL processes to extract, transform, and load data from various sources into our data warehouse
  • Monitors and optimizes AWS resources to ensure cost-effective and high-performance data processing
  • Develops and maintains Bash scripts for automating data processing tasks and system maintenance
  • Implements automated testing and deployment processes to ensure reliability and efficiency
  • Analyzes and optimizes data processing performance to meet business needs and service level agreements (SLAs)
  • Troubleshoots and resolves issues promptly
  • Adds to team culture of diversity, equity, inclusion, and respect

Preferred Qualifications

  • Experience with other big data technologies such as Hadoop
  • AWS certification
  • Experience with data warehousing solutions and ETL tools