Responsibilities
• Design and develop data ingestion solutions for big data.
• Build efficient and reliable data processing solutions.
• Design and implement data storage solutions.
• Develop scalable data pipelines for ingestion, transformation, and storage of large datasets.
• Optimize data pipelines for real-time and batch processing.
• Ensure data quality and integrity throughout the pipeline by implementing effective data validation and monitoring strategies.
Requirements
• Bachelor's degree or higher in Computer Science, Engineering, or a related field.
• Minimum of 1-2 years' experience designing and implementing ETL solutions.
• Familiarity with AWS data ingestion and processing tools such as Fluent Bit, Kinesis, and Glue.
• Strong expertise in big data technologies such as Apache Spark.
• Experience with AWS data storage solutions, including S3, Redshift, Aurora, and Apache Iceberg.
• Proficiency in programming languages including Python, Scala, and Java.
• AWS certification and/or hands-on experience with AWS data services preferred.
Licence No: 12C6060