Data Engineer
Company | Barr
---|---
Location | Minneapolis, MN, USA
Salary | $98,000 – $115,000
Type | Full-Time
Degrees | Bachelor’s
Experience Level | Mid Level, Senior
Requirements
- Bachelor’s degree in computer science, data engineering, information systems, or a related field, or equivalent experience.
- Five or more years of relevant experience in data engineering or ETL development, building and managing data pipelines in a production environment.
- Knowledge of relational databases (especially SQL Server), including writing complex SQL queries and change data capture (CDC) ETL practices.
- Experience developing and maintaining ETL processes using SQL Server Integration Services (SSIS).
- Experience defining or contributing to data governance frameworks, including data quality, access management, and metadata practices.
- Proficiency in a scripting or programming language (e.g., Python, .Net, or PowerShell).
- Strong interpersonal, oral, and written communication skills and the ability to communicate effectively with a variety of different people.
- Must be legally authorized to work in the United States without the need for sponsorship by Barr, now or in the future.
Responsibilities
- Troubleshoot, maintain, and monitor ETL processes built with SQL Server Integration Services (SSIS), Python, or other tools.
- Review existing ETL patterns to identify areas for optimization, make recommendations, and contribute to the redesign of integration workflows.
- Improve traceability by building audit and logging capabilities into ETL pipelines.
- Partner with teams to strengthen observability by improving alerting, error handling, and operational dashboards that keep ETL health visible and actionable.
- Architect and support robust API and point-to-point integrations with internal and third-party systems, bridging data silos and unlocking smoother automation across software tools.
- Contribute to and manage structured change management and version control practices using Azure DevOps to support consistent development and deployment of ETL workflows.
- Help set the tone for data engineering excellence by guiding best practices in pipeline structure, metadata usage, and data quality, helping ensure each workflow is built to scale and evolve.
Preferred Qualifications
- Familiarity with NoSQL databases such as MongoDB, Redis, or DynamoDB.
- Exposure to vector databases (e.g., Pinecone, Weaviate, Qdrant, Milvus).
- Awareness of data patterns in AI/ML systems, including prompt logging, model output storage, and managing metadata associated with generative AI workflows.
- Basic knowledge of graph databases (e.g., Neo4j, Amazon Neptune).