Key responsibilities:
You will be responsible for end-to-end software development and support for all work related to projects, quarterly change requests, and L3 production fixes. This includes software product implementation and administration, as well as application design, development, implementation, testing, and support.
You will also be responsible for quality assurance of the team’s delivery in conformance with the Bank-defined software delivery methodology and tools. You will partner with other technology functions to help deliver required technology solutions.
Other responsibilities include:
- Create frameworks and technical features that enable faster operationalisation of data models, analytical models (including AI/ML), and user-generated content (dashboards, reports, etc.)
- Partner effectively with data scientists to enable faster adoption of AI/ML model-based systems
- Independently install, customise and integrate software packages and programs
- Carry out POCs involving new data technologies
- Design and develop application frameworks for data integration
- Create technical documents, such as solution designs and program specifications, for target solutions
- Design and develop applications including, but not limited to: software applications, data integration, user interfaces, and automation
- Maintain software and recommend improvements to ensure platform-centric management of software applications
- Perform application performance tuning
- Work with production support team members to conduct root cause analysis of issues, review new and existing code and/or perform unit testing
- Perform tasks as part of a cross-functional development team using agile or other methodologies and utilising project management software
Key skills required:
Technical skillsets
TEAM Architecture (Big Data)
- 8+ years of experience with application development in the Big Data ecosystem (such as Cloudera, Elasticsearch, Spark, Presto, Hive, Impala, Kibana, Logstash, HBase, Kafka)
- Expertise in building applications using Java, Scala, Python, Shell scripting
- Experience in large scale implementations and performance optimizations in the Big Data Ecosystem using Spark, Hive, Impala, Apache Kudu, Ozone, Presto, Tez, Iceberg etc
- Knowledge of frameworks such as Spring, Struts, Hibernate etc
- Knowledge of Object Oriented Programming, Multi-threading, Containerization, etc
- Knowledge of related technologies such as HTML, JavaScript, CSS, React, jQuery, Node.js, Docker
- Knowledge of Elastic, Kibana, Logstash, Hive, Sqoop, MongoDB, Oozie, Flume, Kafka
- Expertise in integrating applications with DevOps tools
- Knowledge of building applications on MPP appliances such as Teradata, Greenplum, Netezza is a plus
At least 2 to 3 technical certifications in any of the technologies below:
1. Language – SQL, Java, Python, Scala, JavaScript, Node.js
2. Automation / scripting – Control-M, Shell scripting, Groovy
3. Cloudera Hadoop distribution – Hive, Impala, Spark, Kudu, Kafka, Flume
Additional experience that gives candidates an added advantage, for all teams:
1. CI/CD software, Testing Tools - Jenkins, SonarQube
2. Version Control Tool - Aldon+LMe, CA Endevor
3. Deployment Tool kit -Jenkins
4. Service or Incident Management (IcM) Tools - Remedy
5. Source Code Repository Tool - Bitbucket
6. Scheduling Tool - Control-M
7. Defect Management Tool - JIRA
8. Application Testing tool – QuerySurge
9. Cloud certification
10. Platforms provided by FICO, Experian, SAS for credit and portfolio management