Big Data Support Engineer
Job Description: Big Data Support Engineer – Batch Monitoring (Junior Role)
Experience Requirements:
- 1-3 years of IT experience.
- Minimum 1-3 years of hands-on experience with Hadoop and Kubernetes (K8s) ecosystems.
- At least 1-3 years of relevant experience in the banking/financial services industry.
Skills and Qualifications:
- Demonstrable analytical skills with in-depth knowledge of the Big Data ecosystem in a production support environment.
- Hands-on experience with Hadoop toolsets including Hive, Hue, Spark, Kafka, Kudu, Flink, Ozone, and streaming applications.
- Strong knowledge of SQL, Python, Java, and Unix shell scripting.
- Incident, problem, and service outage management experience is a plus.
- Excellent communication and articulation skills.
Roles and Responsibilities:
- Batch Monitoring and Operational Support:
- Provide operational support to ensure that application Service Level Indicators (SLIs) are met.
- Troubleshoot and resolve batch issues and identify root causes to minimize SLA impacts.
- Proactively identify opportunities for preventive maintenance to enhance system reliability.
- Performance Optimization:
- Perform administration and optimization to achieve optimal system performance.
- Conduct reviews of technical and functional solutions from enhancement/project teams to ensure alignment with established standards.
- Recommend design and operational optimizations for continuous improvement.
- Collaboration and Stakeholder Communication:
- Work closely with diverse teams, including tech infrastructure, application services, and business units, to ensure seamless operations.
- Provide timely updates on status, progress, issues, and resolutions to business stakeholders.
- Respond promptly to queries raised by various business units.
- Data Engineering and Pipeline Management:
- Build and optimize big data pipelines, architectures, and data sets.
- Support data transformation processes, metadata management, dependency tracking, and workload management.
- Work on unstructured datasets and perform root cause analysis to identify opportunities for process improvement.
- Incident Management and Disaster Recovery:
- Participate in disaster recovery and business continuity exercises.
- Support software, platform, and database upgrades for systems.
- Kubernetes and Streaming Applications Support:
- Provide support for data APIs and data science/machine learning applications running on Kubernetes platforms.
- Troubleshoot batch, API, and streaming applications built with core Java, Python, and Scala (Spark).
Preferred Experience:
- Experience with message queuing, stream processing, and scalable big data stores.
- Exposure to ledger balance reporting and GL data reconciliation between systems, data marts, and reports.
- Knowledge of Teradata FSLDM and performance tuning on MPP data pipelines (e.g., Teradata).
This position is ideal for candidates with a strong foundation in Big Data support and batch monitoring, excellent problem-solving skills, and the ability to collaborate effectively in dynamic environments, particularly within the banking/financial services industry.