Responsibilities:
- Design and develop data ingestion solutions for big data.
- Build efficient and reliable data processing solutions.
- Design and implement data storage solutions.
- Develop scalable data pipelines for ingestion, transformation, and storage of large datasets.
- Optimize data pipelines for real-time and batch processing.
- Ensure data quality and integrity throughout the data pipeline by implementing effective data validation and monitoring strategies.
Requirements:
- 5-8 years of experience designing and implementing ETL solutions.
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- Familiarity with data ingestion and processing tools on AWS, such as Apache NiFi, Amazon Kinesis, and AWS Glue.
- Strong expertise in big data technologies such as Apache Spark and Hadoop.
- Experience with AWS data storage solutions, including Amazon S3, Apache Iceberg, Amazon Aurora, and Amazon OpenSearch Service.
- Proficiency in programming languages including Python, Scala, and Java.
- AWS data services certification and/or hands-on experience preferred.
- Attention to detail and a strong commitment to delivering high-quality solutions.
- Strong problem-solving skills and the ability to work effectively in a fast-paced environment.
- Ability to work well as part of a team.
- Excellent communication and interpersonal skills.