This role involved the implementation of a project in Snowflake and the setup of portfolio entitlement.
Responsibilities:
- Work closely with the data product team and business end-users to implement and support data platforms using best-of-breed technologies and methodologies.
- Design and build robust and scalable data ingestion and data management solutions for batch-loading and streaming data sources.
- Enable ingestion checks and data quality checks for all data sets in the data platform and ensure data issues are actively detected, tracked, and fixed without breaching SLAs.
- Work with team members to establish best practices and internal processes that enhance data pipeline operations.
- Coach junior team members through architecture design review and code review.
Requirements:
- At least 4 years of experience working as a data engineer or backend developer in the big data field.
- Solid working knowledge of implementing optimal data structures and algorithms to build efficient and scalable applications in Java or Python.
- Experience in application integration with Snowflake, Oracle and MS-SQL.
- Experience with the Systems Development Life Cycle (SDLC) implementation methodology and/or Agile methodologies such as Scrum and Kanban.
- Hands-on experience using Linux (or a Unix-like OS) as the development environment, with familiarity in shell scripting and command-line tools.
- Understand and apply good industry practices for code versioning, testing, CI/CD workflows and code documentation.
- Familiarity with AWS services will be an added advantage.
- A good team player with strong analytical skills who enjoys solving complex problems with innovative ideas.
- Good communication and interpersonal skills to interact with data analysts, business end-users and vendors when designing and developing solutions.
- Detail-oriented and meticulous in day-to-day operations.