Job Description / Requirements
• Bachelor’s or master’s degree in computer science, data engineering, or a related field.
• Minimum 5 years of experience in data engineering, with expertise in AWS services, Databricks,
and/or Informatica IDMC.
• Proficiency in programming languages such as Python, Java, or Scala for building data pipelines.
• Ability to evaluate potential technical solutions and recommend approaches to resolve data issues, with particular focus on performance assessment of complex data transformations and long-running data processes.
• Strong knowledge of SQL and NoSQL databases.
• Familiarity with data modeling and schema design.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• AWS certifications (e.g., AWS Certified Data Analytics - Specialty), Databricks certifications, and Informatica certifications are a plus.
Preferred Skills:
• Experience with big data technologies such as Apache Spark and Hadoop, particularly on Databricks.
• Knowledge of containerization and orchestration tools like Docker and Kubernetes.
• Familiarity with data visualization tools like Tableau or Power BI.
• Understanding of DevOps principles for managing and deploying data pipelines.
• Experience with version control systems (e.g., Git) and CI/CD pipelines.
• Knowledge of data governance and data cataloguing tools, especially Informatica IDMC.