Job Description:
10+ years of extensive working knowledge and expertise in Spark (Scala or PySpark) and Hive.
Experience in the design, development, and performance tuning of Spark applications.
Strong programming skills in Java, Scala, or Python.
Familiarity with big data processing tools and techniques.
Experience with the Hadoop ecosystem.
Good understanding of distributed systems.
Should have working knowledge of RDBMS, data warehouses, and Unix shell scripting.
Excellent analytical and problem-solving skills.
Familiarity with workflow scheduling tools such as Airflow and Control-M.
Excellent communication and collaboration skills.