Staff Engineer – Data Engineering & Analytics

Company: Geico
Location: Seattle, WA; Washington, DC; San Francisco, CA; Austin, TX; Los Angeles, CA; Raleigh, NC; Pittsburgh, PA; Portland, OR; Boulder, CO; Bethesda, MD (USA)
Salary: $105,000 – $230,000
Type: Full-Time
Experience Level: Senior, Expert or higher

Requirements

  • Advanced programming and big data experience with Python, SQL, dbt, Spark, Kafka, Git, and containerization (Docker and Kubernetes)
  • Advanced experience with data warehouses, OLAP, dimensional modeling, and analytics
  • Demonstrable knowledge of business intelligence tools (strong preference for Power BI) and/or ETL tools (strong preference for SSIS or dbt)
  • Experience with Apache Iceberg for managing large-scale tabular data in data lakes is a plus
  • Familiarity with database concepts such as MPP systems, NoSQL, and cloud data warehouses
  • Experience architecting and designing new ETL and BI systems
  • Experience with supporting existing ETL and BI systems
  • Experience with scripting languages such as Python is preferred
  • Experience with enterprise orchestration tools such as Airflow is preferred
  • Ability to balance competing priorities and excel in a dynamic environment
  • Advanced understanding of DevOps concepts, including the Azure DevOps framework and tools
  • Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)
  • Advanced understanding of monitoring concepts and tooling
  • Strong problem-solving ability
  • Experience with Marketing, Product, Sales, Service, Customer, Associate, Billing, Agency, Claims, or Telematics data is preferred
  • Experience with front-end development using React/JavaScript is a plus

Responsibilities

  • Scope, design, and build scalable, resilient distributed systems
  • Use Python, SQL, NoSQL, and dbt along with Apache Spark for data processing; container orchestration services such as Docker and Kubernetes; and various Azure tools and services
  • Use SQL Server Integration Services and reporting tools such as Power BI and Apache Superset to transform and report on large volumes of enterprise data and surface new insights
  • Apply your passion for data exploration to produce high-quality, accurate data and reports with visualizations that empower outstanding business decisions
  • Own the technical aspects of projects at the team level
  • Lead in design sessions and code reviews with peers to elevate the quality of engineering across the organization
  • Spearhead new feature use (innovate within existing tooling)
  • Spearhead new software acquisition and use (innovate with new tooling)
  • Leverage automation to eliminate redundant, error-prone tasks and improve solution quality
  • Build with engineering excellence and leverage your technical skills to drive towards the best solutions
  • Engage in cross-functional collaboration throughout the entire software lifecycle
  • Define, create, and support reusable application components/patterns from a business and technology perspective
  • Mentor other engineers
  • Consistently share best practices and improve processes within and across teams

Preferred Qualifications

  • Experience with Microsoft Fabric and Azure Data Factory