Requirements
- Bachelor's degree in Computer Science, Information Technology, or another relevant field
- Minimum of 2 years' work experience in data ingestion, ETL, data modelling, and data architecture for building data lakes.
- Proficient in designing, coding, and tuning big data processes using PySpark.
- Minimum of 2 years' hands-on experience on the AWS platform using core services such as Athena, Glue (PySpark), RDS (PostgreSQL), and S3, with Airflow for orchestration (a representative pipeline is sketched after this list).
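For illustration, a minimal sketch of the kind of PySpark ETL pipeline this role involves: reading raw data from an S3 landing zone, applying a transformation, and writing partitioned Parquet back to a curated zone where it can be queried via Athena. All bucket names, paths, and column names below are hypothetical.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns):
# read raw CSV from S3, normalize a timestamp, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Ingest: raw CSV files landed in a hypothetical S3 "raw" zone.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-datalake/raw/orders/")
)

# Transform: parse the timestamp column and derive a date partition key.
curated = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write Parquet to the curated zone, partitioned by date,
# so downstream Athena/Glue queries can prune partitions efficiently.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-datalake/curated/orders/")
)
```

In practice, a job like this would typically run as an AWS Glue job or be scheduled as a task in an Airflow DAG.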