Role: Developer
Job Level: More than 10 years of relevant experience (L4)
Job description
Data Framework Engineer
Expected Years of Experience: 8-10 years
SCOPE OF THE ROLE
You will be responsible for the end-to-end software development and support for all work related to projects, quarterly change requests, and L3 production fixes.
This includes software product implementation and administration, as well as application design, development, implementation, testing, and support.
You will also be responsible for quality assurance of the team’s delivery in conformance with the Bank-defined software delivery methodology and tools.
You will partner with other technology functions to help deliver required technology solutions.
Other responsibilities include:
- Create frameworks and technical features that enable faster operationalisation of data models, analytical models (including AI/ML), and user-generated content (dashboards, reports, etc.)
- Partner effectively with data scientists to enable faster adoption of AI/ML model-based systems
- Independently install, customise and integrate software packages and programs
- Carry out proofs of concept (POCs) involving new data technologies
- Design and develop application frameworks for data integration
- Create technical documents, such as solution designs and program specifications, for target solutions
- Perform design and development of applications, including but not limited to: software applications, data integration, user interfaces, and automation
- Maintain software and recommend improvements to ensure platform-centric management of software applications
- Carry out performance tuning
- Work with production support team members to conduct root cause analysis of issues, review new and existing code and/or perform unit testing
- Perform tasks as part of a cross-functional development team using agile or other methodologies and utilising project management software
Technical skillsets
Team: Architecture (Big Data)
- 8+ years of experience with application development in the Big Data ecosystem (e.g., Cloudera, Elasticsearch, Spark, Presto, Hive, Impala, Kibana, Logstash, HBase, Kafka)
- Expertise in building applications using Java, Scala, Python, shell scripting, and Go
- Experience in large-scale implementations and performance optimisations in the Big Data ecosystem using Spark, Hive, Impala, Apache Kudu, Ozone, Presto, Tez, Iceberg, etc.
- Knowledge of frameworks such as Spring, Struts, Hibernate, etc.
- Knowledge of object-oriented programming, multi-threading, containerisation, etc.
- Knowledge of related technologies such as HTML, JavaScript, CSS, jQuery, Node.js, and Docker
- Knowledge of Elasticsearch, Kibana, Logstash, Hive, Sqoop, MongoDB, Oozie, Flume, and Kafka
- Expertise in integrating applications with DevOps tools
- Knowledge of building applications on MPP appliances such as Teradata, Greenplum, and Netezza is a plus
At least two to three technical certifications in any of the technologies below:
- Languages – SQL, Java, Python, Scala, JavaScript, Node.js
- Automation/scripting – Control-M, shell scripting, Groovy
- Cloudera Hadoop distribution – Hive, Impala, Spark, Kudu, Kafka, Flume
Additional experience, for all teams, that creates an added advantage:
- CI/CD and Testing Tools – Jenkins, SonarQube
- Version Control Tools – Aldon LMe, CA Endevor
- Deployment Toolkit – Jenkins
- Service or Incident Management (IcM) Tool – Remedy
- Source Code Repository Tool – Bitbucket
- Scheduling Tool – Control-M
- Defect Management Tool – JIRA
- Application Testing Tool – QuerySurge
- Cloud certification
- Platforms provided by FICO, Experian, and SAS for credit and portfolio management