Senior Data Engineer

Company: Lowe’s
Location: Charlotte, NC, USA
Salary: Not Provided
Type: Full-Time
Degrees: Bachelor’s
Experience Level: Senior

Requirements

  • Bachelor’s Degree in Computer Science, CIS, Engineering, or a related field
  • 4 years of experience in any job title/occupation involving Data, BI, Platform Engineering, Data Warehousing/ETL, Software Engineering or a related field
  • 4 years of experience in PySpark
  • 4 years of experience in Relational Databases (SQL)
  • 4 years of experience in ETL
  • 4 years of experience in Airflow
  • 4 years of experience in Apache Spark
  • 4 years of experience in Hadoop
  • 4 years of experience in Data Modelling
  • 4 years of experience in Data Orchestration
  • 3 years of experience in SDLC Practices
  • 3 years of experience in Jira
  • 2 years of experience in AWS
  • 2 years of experience in Snowflake
  • 2 years of experience in Machine Learning

Responsibilities

  • Translates complex cross-functional business requirements and functional specifications into logical program designs and data solutions
  • Partners with product teams to understand business needs and functional specifications
  • Guides development teams in the design and build of complex Data or Platform solutions
  • Solves complex architecture, design, and business problems
  • Provides mentorship and guidance to more junior engineers
  • Ensures timely feedback and direction on specific engineering tasks
  • Coordinates, executes, and participates in Component Integration Testing (CIT), System Integration Testing (SIT), and User Acceptance Testing (UAT) scenarios to identify application errors and ensure quality software deployment
  • Continuously works with cross-functional development teams (Data Analysts and Software Engineers) to create PySpark jobs using Spark SQL and helps them build reports on top of data pipelines
  • Schedules Oozie workflows for all pipelines and loads data into Hive and Druid tables
  • Builds, tests, and enhances data curation pipelines integrating data from a wide variety of sources, such as DBMSs, file systems, and APIs, for various OKRs and metrics development with high data quality and integrity
  • Executes the development, maintenance, and enhancement of data ingestion solutions of varying complexity across data sources such as DBMSs, file systems (structured and unstructured), APIs, and streaming, on both on-prem and cloud infrastructure
  • Demonstrates strong acumen in Data Ingestion toolsets and nurtures and grows junior members in this capability
  • Executes the development, maintenance, and enhancement of BI solutions of varying complexity across data sources such as DBMSs and file systems (structured and unstructured), on both on-prem and cloud infrastructure
  • Creates level metrics and other complex metrics; uses custom groups, consolidations, drilling, and complex filters

Preferred Qualifications

    No preferred qualifications provided.