- Minimum of 10 years of overall experience.
- Hands-on experience with Hadoop clusters using major distributions such as IBM BigInsights, Hortonworks, and Cloudera.
- In-depth understanding of the Spark architecture, including Spark Core, RDDs, DataFrames, and Spark SQL (these layers are illustrated in the Scala sketch after this list).
- Experience transferring data from an RDBMS into HDFS and Hive tables using Sqoop (a representative import command is also shown after this list).
- Experience in the design, development, and implementation of ETL applications that move and transform data to and from data warehouses and data marts.
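
To make the Spark expectation concrete, here is a minimal Scala sketch walking through the layers named above: an RDD built on Spark Core, its promotion to a DataFrame, and a declarative query through Spark SQL. The application name, column names, and sample rows are illustrative placeholders, not part of the role's actual workload.

```scala
import org.apache.spark.sql.SparkSession

object SparkLayersDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-layers-demo") // placeholder app name
      .master("local[*]")           // local mode for illustration only
      .getOrCreate()
    import spark.implicits._

    // Spark Core / RDD layer: a low-level, partitioned collection.
    val rdd = spark.sparkContext.parallelize(
      Seq(("2024-01-01", 120.0), ("2024-01-02", 95.5)) // made-up sample rows
    )

    // DataFrame layer: schema-aware and optimized by Catalyst.
    val df = rdd.toDF("order_date", "amount")

    // Spark SQL layer: register a temp view and query it declaratively.
    df.createOrReplaceTempView("orders")
    spark.sql(
      "SELECT order_date, SUM(amount) AS total FROM orders GROUP BY order_date"
    ).show()

    spark.stop()
  }
}
```

Because the sketch only touches public Spark APIs, it should behave the same when submitted via `spark-submit` to a cluster on any of the distributions listed above.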
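Likewise, a typical Sqoop import from an RDBMS into Hive looks like the command below. The JDBC URL, credentials file, and table and column names are hypothetical placeholders; only the flags themselves are standard Sqoop 1 options.

```bash
# A representative Sqoop 1 import: pull the RDBMS table "orders" into the
# Hive table "staging.orders". Connection details, paths, and names are
# hypothetical placeholders.
sqoop import \
  --connect jdbc:mysql://db.example.com:3306/sales \
  --username etl_user \
  --password-file /user/etl/.db_password \
  --table orders \
  --split-by order_id \
  --num-mappers 4 \
  --hive-import \
  --hive-table staging.orders
```

Here `--split-by` and `--num-mappers` control how Sqoop partitions the parallel import, and `--hive-import` loads the data into a Hive table rather than leaving plain files in HDFS.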