Job Responsibilities:
1) Design, build, and maintain data pipelines and architecture in a centralized data lake environment.
2) Deliver enhancements, new development, defect resolution, and production support for big data ETL pipelines built on AWS-native services.
3) Design and optimize data models on AWS using data stores such as Redshift, RDS, S3, and Athena.
Job Requirements:
• A bachelor's degree in a relevant field.
• Hands-on experience with data lakes, ETL processes, data modeling, and cloud databases.
• Proficiency in programming languages and tools such as Python, PySpark, R, SQL, or VBA.
• Experience with cloud platforms (AWS CLI, AWS SDK APIs, Salesforce Service Cloud).
• Experience with automation tools (UiPath, Microsoft Power Automate, PowerApps).
• Good communication skills.