Software Engineer
| Company | Provi |
| --- | --- |
| Location | Chicago, IL, USA |
| Salary | $105,000 – $140,000 |
| Type | Full-Time |
| Degrees | Bachelor’s |
| Experience Level | Entry Level/New Grad, Junior |
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Foundational experience with distributed systems, cloud-native development, or data engineering (gained through internships, projects, or coursework).
- Interest in AI agent platforms such as Dify, LangChain, or similar frameworks.
- Proficiency in at least one programming language (Python, Java, or similar).
- Foundational knowledge of AWS services (e.g., S3, Lambda, Glue, Redshift, Kinesis, DynamoDB).
- Familiarity with event-driven architecture patterns and real-time data streaming.
- Effective communication skills with a proven ability to work cross-functionally with engineering, product, and data science teams.
- Enthusiasm for Agile methodologies, CI/CD pipelines, and modern testing strategies.
Responsibilities
- Design and implement scalable, cloud-native data pipelines and storage solutions using AWS services such as S3, Glue, Redshift, Lambda, and Kinesis.
- Build APIs and backend services that serve as the data and event backbone for AI agents orchestrated through frameworks like Dify.
- Develop serverless and containerized applications that support internal analytics, AI-driven workflows, and operational tools.
- Build and maintain internal data tools that streamline workflows and enhance operational efficiency across teams.
- Collaborate on the migration of legacy ETL processes and data stores to modern, event-driven, serverless architectures on AWS.
- Apply best practices to decommission legacy systems while ensuring data reliability, quality, and service continuity.
- Build integrations between cloud data services and AI agent frameworks like Dify, empowering AI agents to perform autonomous business tasks.
- Optimize data pipelines and real-time event feeds that supply AI agents with context and actionable information.
- Help design scalable workflows where agents interact with APIs, databases, and other cloud resources to automate decisions and processes.
- Monitor and optimize platform performance, addressing data and service bottlenecks to improve scalability and resilience.
- Ensure that all data services supporting the AI agent ecosystem are highly available, secure, cost-effective, and performant.
- Work closely with product, engineering, and data science teams to deliver robust data and AI-driven solutions.
- Partner with operations and support teams to ensure the scalability and reliability of the data platform and agent orchestration.
- Participate in Agile workflows to deliver high-quality, iterative solutions.
Preferred Qualifications
- Exposure to infrastructure-as-code tools (e.g., Terraform, AWS CDK) is a plus.
- Experience with container technologies (Docker, ECS, EKS) is a plus.
- Interest or experience integrating LLMs or AI-driven workflows into backend systems is a strong plus.