Data Engineer
| Company | Ryan Specialty |
| --- | --- |
| Location | Nashville, TN, USA; Chicago, IL, USA |
| Salary | $78,050 – $90,000 |
| Type | Full-Time |
| Degrees | Bachelor’s |
| Experience Level | Mid Level, Senior |
Requirements
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field, or equivalent experience.
- 3+ years of experience in data engineering or a related field.
- Strong hands-on experience with Databricks, Spark (PySpark or Scala), and distributed data processing.
- Proficiency in SQL, data modeling, and performance tuning.
- Experience working with cloud platforms (e.g., Azure, AWS, GCP) and cloud-native data services.
- Solid understanding of ETL best practices and data pipeline architecture.
- Strong problem-solving skills and ability to work independently and collaboratively.
Responsibilities
- Design, develop, and optimize scalable ETL pipelines using Databricks (Spark / PySpark).
- Integrate data from various structured and unstructured sources into our data lake and warehouse environments.
- Implement data quality checks, monitoring, and validation processes.
- Collaborate with Data Scientists, Analysts, and other Engineers to understand data needs and deliver solutions.
- Tune data jobs for performance, scalability, and reliability.
- Develop and maintain documentation around data systems, processes, and best practices.
- Assist in migrating legacy ETL jobs to modern cloud-native solutions (e.g., Databricks, Delta Lake).
Preferred Qualifications
- Experience with Delta Lake and structured streaming.
- Knowledge of CI/CD processes and DevOps practices for data engineering.
- Familiarity with related data pipeline and orchestration tools such as Informatica, Apache Airflow, or dbt.
- Experience with Snowflake, Redshift, or other cloud data warehouses.
- Python programming skills beyond Spark (e.g., scripting, automation).