Job Description
- Design, develop, and implement data processing pipelines to process large volumes of structured and unstructured data
- Strong working knowledge of and hands-on experience with ODI, OBIEE, and Hadoop
- Good understanding of data modelling using industry-standard data models such as FSLDM
- Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
- Develop and maintain scalable and fault-tolerant Spark applications, adhering to industry best practices and coding standards
- Troubleshoot and resolve issues related to data processing, performance, and data quality in the Spark-Elasticsearch integration
- Monitor and analyze job performance metrics, identify bottlenecks, and propose optimizations in both Spark and Elasticsearch components
- In-depth knowledge of the technology stack at global banks is mandatory
- Flexibility to stretch and take on challenges
- Strong communication and interpersonal skills
- Willingness to learn and execute
Job Requirement
- At least 10 years' experience managing data analysis and decision-making support
- Develop and implement a strategy that provides compelling capabilities to help the business achieve its stated goals
- Responsible for the solution's overall development life cycle, managing complex projects with significant bottom-line impact
- Identify outliers, gaps, and inaccuracies, driving business process and workflow mapping/analysis using data capture and modelling technologies, methods, and tools
- Experience in software design, development, and support using SQL, PL/SQL, ODI, OBIEE, Hadoop, Spark, Python, and Unix shell scripting
- Experience in Murex Datamart
- Experience implementing the Hadoop framework