Experience: 9+ Years
Role: Big Data Engineer
Skills from Primary Skill Cluster: Spark, Big Data, Hive, Java. 5+ years of hands-on Java experience is a must.
Database knowledge and the ability to write queries is a must.
Hands-on experience with Unix shell scripting is a must.
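As an illustration of the kind of shell scripting this role involves, below is a minimal sketch of a data-quality preflight check for an incoming delimited feed file. The function name, filename, and pipe delimiter are assumptions for the example, not part of any actual pipeline.

```shell
#!/bin/sh
# Minimal sketch: validate a pipe-delimited feed file before loading.
# check_feed <file> <expected_column_count>
check_feed() {
    file="$1"
    expected_cols="$2"
    # Fail fast if the file is missing or empty
    [ -s "$file" ] || { echo "ERROR: $file missing or empty"; return 1; }
    # Count rows whose column count differs from the expected layout
    bad=$(awk -F'|' -v n="$expected_cols" 'NF != n' "$file" | wc -l)
    if [ "$bad" -gt 0 ]; then
        echo "ERROR: $bad malformed rows in $file"
        return 1
    fi
    echo "OK: $(wc -l < "$file") rows validated"
}
```

A typical invocation would be `check_feed contracts_feed.dat 12` (both arguments illustrative), run before handing the file to the ETL job so malformed upstream data is rejected early.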
Knowledge of Spark SQL (good to have).
Understanding of regulatory reporting (good to have).
Design and implementation of scalable, fault-tolerant ETL pipelines on a Big Data platform to store and process terabytes of contract information from upstream sources with high availability.
Actively work on performance tuning by understanding Spark DAGs and the underlying data structures on both relational and Big Data platforms, delivering high performance for both ETL and reporting components.
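Performance tuning of this kind usually starts with Spark's job-submission settings. Below is a purely illustrative spark-submit invocation; the executor sizes, partition counts, and jar name are assumptions for the sketch and would be derived from cluster capacity and the Spark UI's DAG and stage views in practice.

```shell
# Illustrative only: executor sizing and shuffle-partition counts
# depend on cluster capacity and data volume.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=400 \
  --conf spark.sql.adaptive.enabled=true \
  contract-etl.jar
```

Inspecting the resulting DAG in the Spark UI (skewed tasks, shuffle spill, long stages) then guides further adjustment of these settings.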
Responsibilities: Perform ad-hoc data research and analysis, and provide written summaries of results for non-technical business users.