Job Description
Design and build scalable data pipelines processing millions of events daily. Create robust ETL systems to power our analytics and ML infrastructure.
Requirements
- 3+ years of experience in data engineering
- Strong SQL and database design skills
- Experience with Python, Spark, and Airflow
- Knowledge of cloud data services (AWS/Azure/GCP)
- Familiarity with data warehousing concepts
Responsibilities
- Build and maintain data pipelines for real-time and batch processing (illustrative sketches of both follow this list)
- Design data warehouse schemas and optimize queries
- Implement data quality checks and monitoring systems
- Work with streaming data using Kafka or similar technologies
- Optimize database performance and data storage costs
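For a flavor of the batch side of the role, here is a minimal, illustrative sketch of a daily ETL DAG with a simple row-count quality gate. It assumes Airflow 2.4 or later; the DAG id, task names, and the stubbed warehouse interaction are placeholders, not a description of our production pipelines.

```python
# Minimal sketch of a daily batch pipeline with a data quality gate.
# Assumes Airflow 2.4+; names and the stubbed check are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_events(**context):
    # Placeholder: extract the previous day's events and load them into
    # the warehouse (e.g. via a Spark job or a bulk COPY command).
    ...


def check_row_count(**context):
    # Placeholder quality check: in a real pipeline this count would be
    # queried from the warehouse; here it is stubbed so the sketch stays
    # self-contained. The task fails the run if the load was empty.
    row_count = 1
    if row_count <= 0:
        raise ValueError("daily events load produced no rows")


with DAG(
    dag_id="daily_events_etl",        # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load_daily_events", python_callable=load_daily_events)
    quality_gate = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    # The quality gate runs only after the load succeeds.
    load >> quality_gate
```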
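For the streaming side, a similarly minimal sketch of a Kafka consumer, assuming the kafka-python client and JSON-encoded messages; the topic name, broker address, and consumer group are placeholders rather than details of our actual setup.

```python
# Minimal sketch of a streaming consumer using the kafka-python client.
# Topic, broker address, and group id are illustrative placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # hypothetical topic name
    bootstrap_servers="localhost:9092",  # placeholder broker address
    group_id="analytics-pipeline",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate, enrich, and write each event to the
    # warehouse or a stream processor; here we just print a summary.
    print(event.get("event_type"), message.offset)
```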

