Experience: 8+ Years
Role: Software Consultant
Skills: ETL, UNIX, SQL, Hadoop, Hive, Shell Scripting, and Informatica.
Key Responsibilities:
· Data warehousing concepts, strong ETL knowledge, and Informatica PowerCenter installations, migrations, and troubleshooting.
· Development and testing of complex ETL logic; Cloud Data Integration, administration, and Secure Agent installation in IICS; Informatica PowerCenter (versions 9.x and 10.x) and Informatica administration; Talend Cloud Big Data; HDFS and an understanding of Hadoop; basics of Spark processing.
· Proficiency in Informatica PowerCenter 10.x or higher, preferably in a Unix (Linux/AIX) environment with Oracle sources and targets.
· Must have experience in Oracle 19c or higher (SQL*Plus, PL/SQL) and in Unix shell scripting (Linux/AIX).
· Experience with job scheduling tools such as Control-M (a scheduler-friendly wrapper sketch appears after this list).
· Good to have experience with change control management tools and Visio.
· Experience across all stages of the Software Development Life Cycle (SDLC).
· Able to perform requirements gathering, impact analysis, ETL mapping, build, and unit testing, and to support SIT/UAT/production.
· At least 3 years’ experience in Hadoop development
· Banking/finance domain experience will be an advantage.
· Integrate multiple Retail Banking source systems into data lakes.
· Bring source system data into a standard format and load it into Hadoop (see the load sketch after this list).
· Reconcile the data provided against the data loaded, and generate data quality reports (see the reconciliation sketch after this list).
· Perform data quality checks on the data loaded into the data warehouse.
· Design and implement data models and databases for data storage in data lakes.
· Able to work independently from data discovery through production implementation.
· Delivered multiple projects on time and within SLA.
· Monitor data loads and support production incidents.
· Design, develop, and implement data processing pipelines to process large volumes of structured and unstructured data
· Good knowledge of and working experience with databases and Hadoop (Hive, Impala, Kudu).
· Good knowledge of and working experience with scripting (shell, awk, quick automation to integrate third-party tools) and BMC monitoring tools; the load sketch after this list shows a small awk standardization step.
· Good understanding and knowledge of data modelling using industry-standard data models such as FSLDM (Financial Services Logical Data Model).
· Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
· Good to have experience working with NoSQL as well as virtualized database environments.
· Prior in-depth experience developing banking applications using ETL and Hadoop is mandatory.
· Experience in Hadoop development.
· Experience with data lakes (integration of different data sources into the data lake).
· SQL stored procedures, queries, and functions (see the SQL*Plus sketch after this list).
· Unix shell scripting.
· Experience with distributed computing, parallel processing, and working with large datasets
· Familiarity with big data technologies such as Hadoop, Hive, and HDFS
· Job Scheduling in Control-M
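
A minimal sketch of the kind of Unix wrapper a Control-M job could invoke to run a PowerCenter workflow with pmcmd; the domain, Integration Service, folder, and workflow names, and the PMUSER/PMPASS environment variables, are illustrative placeholders rather than values from this engagement.

    #!/bin/sh
    # Wrapper a Control-M job can call to start a PowerCenter workflow.
    # Exits non-zero on failure so the scheduler can alert or retry.
    DOMAIN="Domain_ETL"          # placeholder domain
    INT_SVC="IS_ETL"             # placeholder Integration Service
    FOLDER="RETAIL_BANKING"      # placeholder repository folder
    WORKFLOW="wf_load_accounts"  # placeholder workflow

    # -uv/-pv read the user name and password from environment variables,
    # keeping credentials off the command line; -wait blocks until the
    # workflow completes so the exit code reflects the run status.
    pmcmd startworkflow \
        -sv "$INT_SVC" -d "$DOMAIN" \
        -uv PMUSER -pv PMPASS \
        -f "$FOLDER" -wait "$WORKFLOW"
    rc=$?

    if [ "$rc" -ne 0 ]; then
        echo "ERROR: $WORKFLOW failed (pmcmd return code $rc)" >&2
    fi
    exit "$rc"

Because Control-M monitors the script's exit status, a failed pmcmd run surfaces directly as a failed job in the scheduler.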
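A minimal sketch, assuming a comma-delimited inbound extract and a Hive staging table named stg_accounts, of standardizing source data with awk and loading it into Hadoop; the file paths, table name, and HiveServer2 JDBC URL are assumptions.

    #!/bin/sh
    # Standardize a source extract and load it into a Hive staging table.
    SRC=/data/inbound/accounts_20240101.csv   # assumed inbound file
    STAGED=/data/staged/accounts.psv          # standardized output

    # Standardize: drop the header row and convert commas to pipes
    # ($1=$1 forces awk to rebuild the record with the new OFS).
    tail -n +2 "$SRC" | awk -F',' 'BEGIN{OFS="|"} {$1=$1; print}' > "$STAGED"

    # Stage the standardized file in HDFS.
    hdfs dfs -put -f "$STAGED" /landing/retail/accounts/

    # Load it into the Hive staging table (HiveQL via beeline).
    beeline -u "jdbc:hive2://hiveserver:10000/retail" -e \
        "LOAD DATA INPATH '/landing/retail/accounts/accounts.psv' INTO TABLE stg_accounts;"

The awk step assumes no embedded commas inside quoted fields; a feed with embedded delimiters would need a real CSV parser rather than this one-liner.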
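A minimal reconciliation sketch that compares the staged row count against the count loaded into Hive and reports a PASS/FAIL data quality status; it reuses the hypothetical file, table, and JDBC URL from the load sketch above.

    #!/bin/sh
    # Reconcile the staged row count against the Hive target table.
    SRC_CNT=$(wc -l < /data/staged/accounts.psv)
    HIVE_CNT=$(beeline -u "jdbc:hive2://hiveserver:10000/retail" \
        --silent=true --showHeader=false --outputformat=tsv2 \
        -e "SELECT COUNT(*) FROM stg_accounts;")

    if [ "$SRC_CNT" -eq "$HIVE_CNT" ]; then
        echo "RECON PASS: $SRC_CNT rows in source and target"
    else
        echo "RECON FAIL: source=$SRC_CNT target=$HIVE_CNT" >&2
        exit 1
    fi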
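A minimal sketch, per the Oracle/SQL*Plus and shell scripting items above, of calling a stored procedure from a Unix script; the EZConnect string and the refresh_daily_summary procedure are hypothetical.

    #!/bin/sh
    # Call an Oracle stored procedure from a shell script via SQL*Plus.
    # WHENEVER SQLERROR maps any SQL failure onto the script's exit code.
    sqlplus -s "$ORA_USER/$ORA_PASS@//dbhost:1521/ORCLPDB" <<'EOF'
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    SET SERVEROUTPUT ON
    -- refresh_daily_summary is a hypothetical procedure, shown for illustration.
    EXEC refresh_daily_summary(TO_DATE('2024-01-01','YYYY-MM-DD'));
    EXIT
    EOF
    rc=$?
    [ "$rc" -ne 0 ] && echo "PL/SQL call failed (rc=$rc)" >&2
    exit "$rc"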