Lead Data Engineer
| Company | Las Vegas Sands Corp |
| --- | --- |
| Location | Dallas, TX, USA |
| Salary | Not Provided |
| Type | Full-Time |
| Degrees | Bachelor’s, Master’s |
| Experience Level | Senior, Expert or higher |
Requirements
- At least 21 years of age.
- Proof of authorization to work in the United States.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- Must be able to obtain and maintain any certification or license, as required by law or policy.
- 5+ years of experience in data engineering, with at least 2 years in a lead or senior role, preferably in the gaming or casino industry.
- Hands-on experience with on-premises Data Lake pipelines, including storage (HDFS, Cassandra), compute (Spark), and messaging (Kafka) components.
- Experience with on-premises and cloud Data Lakehouse technologies (Iceberg, Dremio, Flink, AWS Lake Formation, AWS Glue, AWS Athena, Azure Databricks with Unity Catalog, Azure Synapse, Microsoft Fabric Lakehouse).
- Proficiency with data pipeline technologies (e.g., Apache Kafka, Apache Airflow, Apache NiFi) for orchestrating data workflows.
- Demonstrated experience with Extract, Transform, Load (ETL) frameworks and tools (e.g., Talend, Informatica, AWS Glue) for data integration and processing.
- Strong knowledge of relational and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) and experience with data warehousing concepts.
- Familiarity with cloud data solutions (e.g., AWS, Azure, Google Cloud) and their associated data services (e.g., AWS Redshift, Google BigQuery).
- Proficiency in programming languages commonly used for data engineering (e.g., Python, Java, Scala) for building data pipelines and processing workflows.
- Understanding of data modeling techniques (e.g., star schema, snowflake schema) to support analytics and reporting needs.
- Demonstrated experience with data quality tools and frameworks to ensure data integrity and compliance.
- Knowledge of continuous integration/continuous deployment (CI/CD) practices and tools (e.g., Jenkins, GitLab CI) for automating data pipeline deployment.
- Strong analytical and problem-solving skills with a focus on delivering high-quality data solutions.
- Proven ability to lead and mentor junior data engineers, fostering a culture of knowledge sharing and continuous improvement.
- Strong interpersonal skills with the ability to communicate effectively and interact appropriately with management, other Team Members, and outside contacts of different backgrounds and levels of experience.
Responsibilities
- Design, develop, and maintain robust data pipelines that support data ingestion, transformation, and storage, ensuring high data quality and reliability.
- Lead the integration of diverse data sources (e.g., transactional systems, third-party APIs, IoT devices) to create a unified data ecosystem for the casino management system.
- Implement and optimize ETL processes to ensure efficient data movement and processing for both batch and real-time analytics.
- Collaborate with the Principal Data Architect to establish the overall data architecture, ensuring it meets business needs and supports future scalability.
- Develop and implement data quality checks and monitoring processes to ensure accuracy and consistency across all data sources.
- Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions that enable data-driven decision-making.
- Monitor and optimize data pipeline performance, identifying bottlenecks and implementing improvements to enhance data processing speeds.
- Maintain comprehensive documentation of data engineering processes, architecture, and workflows to support ongoing development and maintenance.
- Perform job duties in a safe manner.
- Attend work as scheduled on a consistent and regular basis.
- Perform other related duties as assigned.
Preferred Qualifications
No preferred qualifications provided.