AVP, Big Data Analyst, Regional Consumer Banking Group Data Chapter, Group Transformation (ID: WD61652)

DBS Bank Ltd.


Business Function

Here at the DBS Transformation Group, we focus on nurturing the culture of the “World’s Best Bank” (Euromoney 2018, 2019 and 2020). Our approach is a combination of both science and art. We immerse our stakeholders in the world of design thinking and experimentation, drive rigorous creativity along our innovation pipeline, and build connections between corporate entrepreneurs and start-ups. We are a cross-disciplinary team focused on the invention of solutions that will radically improve the way people live, work and play. We are passionate and committed to making banking joyful (while having lots of fun)!

Responsibilities

  • Build and manage reusable data assets to meet business requirements and support data initiatives for Consumer Banking
  • Maintain a good understanding of the core Consumer Banking data categories and sub-categories, such as transactions, lifestyle, demographics, digital activity and product holdings
  • Organize and optimize databases to support and simplify access for all key stakeholders
  • Support the business in uncovering insights through deep-dive analysis, presenting recommendations to non-analytics audiences via storytelling and visualization
  • Translate business needs into data and technical requirements for data pipelining and analytical solutioning
  • Manage frequently evolving business requirements effectively (i.e. scope management)
  • Work closely with data scientists, data translators, data analysts and business stakeholders to define data requirements for any given business use case
  • Collaborate with Tech/Platform teams to communicate requirements effectively and ensure the delivery of scalable solutions that support the business

Requirements

  • B.Sc./M.Sc./PhD or equivalent degree in Computational Engineering, Economics, Finance or any relevant field is preferred
  • 10 years of experience in the analysis, design, development and maintenance of quality applications in a Big Data environment using Hadoop concepts and frameworks
  • At least 5 years of hands-on experience with Hadoop and working knowledge of Hive, Spark, Sqoop, databases such as Teradata/DB2/Oracle, ETL tools, the Presto SQL engine and Collibra is a must
  • Expertise in HDFS architecture, the Hadoop framework (MapReduce, Hive, Pig, Sqoop, Flume) and data warehouse concepts is needed
  • Minimum of 5 years of hands-on experience in UNIX shell scripting and programming in Python, PySpark and Spark is needed
  • Minimum of 2 years of experience with CI/CD pipelines (Jenkins, GitHub) is a must
  • Knowledge of SAS and the ability to understand reports from QlikView and ThoughtSpot are needed
  • Good understanding of RDBMS principles and shared-nothing, MPP architectures
  • Hands-on experience in performance tuning of data jobs, query optimization, file and error handling, and restart mechanisms is a must
  • Good communication skills and the ability to collaborate effectively with the team on deliverables


We offer a competitive salary and benefits package and the professional advantages of a dynamic environment that supports your development and recognises your achievements.

