Senior Software Engineer
Company | Adobe |
---|---|
Location | San Jose, CA, USA |
Salary | $170,500 – $320,000 |
Type | Full-Time |
Degrees | Master’s |
Experience Level | Expert or higher |
Requirements
- M.S. in Computer Science or a related field, or equivalent experience, required
- Experience with distributed processing systems such as Apache Spark, the Hadoop stack, or Apache Kafka (a brief sketch of this stack follows this list)
- Experience with data lake cloud storage such as Azure Data Lake Storage or Amazon Web Services (AWS) S3
- Strong programming skills with extensive experience in Java or Scala
- Leadership skills to collaborate and drive cross-team efforts
- Excellent communication skills
- Adaptability to evolving priorities, including accepting challenges outside one’s comfort zone, learning new technologies, and delivering viable solutions within defined time boundaries
- Ability to think through solutions from both a short-term and a long-term lens in an iterative development cycle
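
For a flavor of the stack named in the requirements, below is a minimal Scala sketch of a Spark Structured Streaming job that reads events from a Kafka topic and lands them in cloud object storage as Parquet. The broker address, topic, bucket, and paths are illustrative placeholders, not details taken from this posting.

```scala
import org.apache.spark.sql.SparkSession

object KafkaIngestSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("profile-ingest-sketch")
      .getOrCreate()

    // Read a stream of events from a Kafka topic; broker and topic are placeholders.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "profile-events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Land the raw events in cloud object storage (ADLS or S3) as Parquet files.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/raw/profile-events/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/profile-events/")
      .start()

    query.awaitTermination()
  }
}
```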
Responsibilities
- Collaborate with a team of engineers & product managers to build high-performance data ingestion pipelines and a data store that serve the Segmentation and Activation use cases
- Own the design and implementation of key components for ingesting and maintaining petabytes of Profile data
- Develop systems to support high-volume data ingestion pipelines that handle both streaming and batch processing (see the batch sketch after this list)
- Leverage popular file and table formats to design storage models that support the required ingestion volumes and data access patterns
- Explore tradeoffs across different formats and schema layouts driven by workload and application characteristics
- Deploy production services and iteratively improve them based on customer feedback
- Follow Agile methodologies using industry-leading CI/CD pipelines
- Participate in architecture, design & code reviews
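
As an illustration of the batch side of the ingestion work described above, here is a hedged Scala sketch that backfills historical records and lays them out partitioned by ingest date so that downstream segmentation-style scans can prune files. The paths, the `timestamp` and `ingest_date` column names, and the partitioning choice are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object BatchBackfillSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("profile-backfill-sketch")
      .getOrCreate()

    // Batch read of historical records; the path and `timestamp` column are placeholders.
    val profiles = spark.read.parquet("s3a://example-bucket/raw/profile-events/")

    // Partition the curated layout by ingest date so that segmentation-style
    // scans can prune files they do not need.
    profiles
      .withColumn("ingest_date", to_date(col("timestamp")))
      .write
      .partitionBy("ingest_date")
      .mode("overwrite")
      .parquet("s3a://example-bucket/curated/profiles/")
  }
}
```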
Preferred Qualifications
- Understanding of file formats like Apache Parquet and table formats such as Databricks Delta, Apache Iceberg, or Apache Hudi is preferred (an illustrative upsert sketch follows this list)
- Understanding of NoSQL databases like Apache HBase, Apache Cassandra, MongoDB, or Azure Cosmos DB is a plus
- Practical experience in building resilient data pipelines at scale is preferred
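
To make the table-format qualification concrete, the sketch below shows the kind of mutable upsert that a table format such as Delta Lake enables on top of a data lake, using the open-source Delta Lake `DeltaTable.merge` API. The table paths and the `profile_id` key column are hypothetical, and the sketch assumes a Delta table already exists at the target path.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DeltaUpsertSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-upsert-sketch")
      // Register Delta Lake's SQL extensions and catalog (standard Delta setup).
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog",
              "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    // Incoming updates; the path and schema are hypothetical.
    val updates = spark.read.parquet("s3a://example-bucket/raw/profile-updates/")

    // Merge updates into an existing Delta table keyed by a hypothetical
    // `profile_id` column: the mutable-upsert pattern table formats enable on a lake.
    DeltaTable.forPath(spark, "s3a://example-bucket/tables/profiles")
      .as("t")
      .merge(updates.as("u"), "t.profile_id = u.profile_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```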