Job Description:
- 10+ years of experience in ETL, Data Warehousing, and Big Data technologies. The candidate should have a strong background in IBM InfoSphere DataStage, Oracle PL/SQL, and Unix scripting, along with hands-on experience with Moody’s tools.
- The role requires in-depth knowledge of data extraction, transformation, and loading (ETL) processes, as well as experience with CI/CD and automation tools to support efficient data operations.
- This is an opportunity to contribute to large-scale data projects and work closely with cross-functional teams in a challenging and dynamic environment.
Responsibilities:
- Design, develop, and maintain ETL processes using IBM InfoSphere DataStage (versions 8.1, 8.7, and 11.3).
- Build and optimize parallel jobs to clean, transform, and load data into target databases.
- Implement robust solutions for data extraction and transformation from various sources including flat files and RDBMS.
- Configure and manage job scheduling and automation using tools like Control-M and Autosys.
- Develop shell scripts and use Unix commands to enhance ETL performance and streamline data processing.
- Ensure efficient job execution, track errors, and troubleshoot performance issues.
- Use Moody’s tools (RCO, RAY, RFO) to support liquidity, risk management, and financial reporting projects.
- Troubleshoot and resolve issues related to general ledger (GL) reconciliation breaks, ALM/LIQ stress processes, and LCR (Liquidity Coverage Ratio) processes.
- Support Big Data solutions by working with core Spark (batch processing) and Spark SQL using Scala.
- Use Hadoop technologies like HDFS, Hive, and Sqoop for large-scale data migration and analysis.
- Key Skills: ETL, Moody’s, CI/CD, RCO, RAY, RFO, ALM, Spark, Scala.
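The shell-scripting and error-tracking duties above can be sketched as a small wrapper that runs an ETL step with retries and an audit log. This is a minimal illustration, not part of the posting: the function name `run_with_retry` and the `ETL_LOG` variable are hypothetical, and any real deployment would invoke the actual scheduler or DataStage CLI command in place of the sample commands.

```shell
#!/bin/sh
# Minimal sketch: run an ETL command with retries and a timestamped log
# trail. run_with_retry and ETL_LOG are illustrative names, not from the
# job description.

ETL_LOG="${ETL_LOG:-/tmp/etl_run.log}"

# run_with_retry MAX_ATTEMPTS CMD [ARGS...]
# Runs CMD up to MAX_ATTEMPTS times, appending each attempt's output and
# exit status to $ETL_LOG; returns 0 on first success, otherwise the
# exit status of the final attempt.
run_with_retry() {
    max_attempts=$1; shift
    attempt=1
    while :; do
        "$@" >>"$ETL_LOG" 2>&1
        status=$?
        printf '%s attempt=%d status=%d cmd=%s\n' \
            "$(date '+%Y-%m-%d %H:%M:%S')" "$attempt" "$status" "$*" >>"$ETL_LOG"
        [ "$status" -eq 0 ] && return 0
        [ "$attempt" -ge "$max_attempts" ] && return "$status"
        attempt=$((attempt + 1))
        sleep 1
    done
}
```

In a DataStage context the wrapped command could be the `dsjob` CLI (for example `run_with_retry 3 dsjob -run -jobstatus PROJECT JobName`), with the project and job names supplied by the scheduler.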