Key Responsibilities:
Define, develop, or modify application modules or enterprise-wide software systems using disciplined software development processes; make modules production-ready by moving them to libraries, completing forms, following procedures, and completing version control documents
Develop Hadoop- and web-service-based applications and components
Interface with users, BAs, and developers to walk through data designs
Define comprehensive unit test coverage of data transformations
Demonstrate analytical and technical skills
Have basic knowledge of the BSS data model, especially CRM, OMS, and Billing
Be part of a team of Hadoop developers working on a Big Data project, with ample opportunity to grow professionally, functionally, and technically
Work with other groups to ensure smooth delivery with high standards and quality
Understand requirements, then propose and deliver superior solutions
Write well-structured, clean code
Key Requirements:
Understanding of Big Data Hadoop Ecosystem components (Sqoop, Hive, Pig, Flume)
Experience with Java or Big Data technologies is a plus
Experience with Hadoop, HDFS, and cluster management
6 years' experience designing and developing enterprise application solutions for distributed systems
Experience with Hive, Pig, MapReduce, and the Hadoop ecosystem framework
Experience with HBase, Talend, and NoSQL databases; experience with Apache Spark or other streaming big data processing frameworks preferred
3.5 years of dedicated experience with Hadoop and its components, including HDFS, Apache Pig, Hive, Sqoop, Spark, HBase, NiFi, and Cassandra
Extensive experience in setting up Hadoop clusters
Good working knowledge of MapReduce and Apache Pig
Experience designing and developing technical specifications using Hadoop technologies
Experience writing Pig scripts to reduce job execution time
Experience developing Hive reports
Well versed in designing and developing both server-side and client-side applications