POSITION GENERAL DUTIES AND TASKS:
Skillset
- Up to 9 years of IT experience, with 5+ years on Hadoop and Kubernetes (K8s) ecosystems
- Most recent 3 years of relevant experience in the banking/financial services industry
- Demonstrable analytical skills and deep knowledge of the Big Data ecosystem in a production support environment
- Hands-on experience with the Hadoop toolset: Hive, Hue, Spark, Kafka, Kudu, Flink, Ozone, and streaming
- Strong SQL, Python, Java, and Unix shell scripting skills
- Incident, problem, and service outage management experience is a plus
- Good communication and articulation skills
Roles/Responsibilities
- Strong in data engineering concepts and programming
- Experience in building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience in supporting data transformation, data structures, metadata, dependency and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Provide operational support and ensure application service level indicators (SLIs) are met
- Troubleshoot batch issues and their root causes to mitigate impacts to SLAs
- Proactively identify opportunities for preventive maintenance
- Perform administration and optimization to achieve optimal system performance
- Review technical and functional solutions from the enhancement/project teams to ensure they conform to established standards
- Recommend design and operational optimizations
- Respond promptly to queries raised by various business units
- Provide timely updates (status, progress, issues, resolutions) to business stakeholders
- Work with project/enhancement teams to review deliverables and transition them to the run team
- Participate in Disaster Recovery and Business Continuity exercises
- Support relevant software/platform/database upgrades for the systems
- Knowledge of troubleshooting Core Java, Python, and Scala Spark batch, API, and streaming applications
- Support data APIs and data science/ML applications running on the Kubernetes platform
Datawarehouse Support Engineer (EDW)
The support specialist will be a techno-functional support team member for the Enterprise Datawarehouse (EDW) under the IT BAU team.
He/she is responsible for ensuring system availability and timely deliverables for datamarts serving various business units such as Finance, Regulatory, and Campaign.
The support specialist, as the analyst expert within the IT team, will co-ordinate and work closely with various teams such as Tech Infra, application services, solutioning, project, and business units in the bank. He/she follows up to resolve issues reported by users and performs impact analysis for projects/enhancements that will affect the EDW.
Roles/Responsibilities
- Responsible for providing BAU support for the Enterprise Datawarehouse
- Responsible for providing operational support and ensuring SLIs for system availability and batch deliverables are met
- Respond promptly to queries raised by various business units
- Provide timely updates on production status and the progress of issue resolution
- Fine-tune applications and systems for high performance and higher-volume throughput
- Review technical and functional solutions from the enhancement/project teams to ensure they conform to IT standards
- Perform impact analysis of enhancements/projects that will affect supported systems
- Identify the underlying problems, analyse the root causes, provide possible solutions/fixes, and respond to users within SLA
- Support relevant software/platform/database upgrades for the systems
- Work with the delivery team to ensure a smooth transition of projects/enhancements to BAU mode
- Incident, problem, and service outage management experience
- Participate in Disaster Recovery and Business Continuity exercises
- Monitor server performance, capacity, utilization, and overall health
- Collaborate with diverse cross-functional teams within IT and business to independently drive outcomes
- At least 3 years of relevant experience in the banking/financial services industry
- Experience with Teradata FSLDM and data mapping
- Hands-on technologist, strong in SQL, UNIX, and Teradata tools and technologies
- Experience in designing high-throughput, fault-tolerant data pipelines for batch processing (ETL/ELT)
- Experience in GL data reconciliation between systems, datamarts, and reports; knowledge of ledger balance reporting is an added advantage
- Experience in performance tuning on Teradata or other MPP data pipelines
- Hands-on experience with the Hadoop toolset (Hive, Hue, Spark, Kafka) is a plus