Singlife is a leading homegrown financial services company, offering consumers a better way to financial freedom. Through innovative, technology-enabled solutions and a wide range of products and services, Singlife provides consumers control over their financial wellbeing at every stage of their lives.
In addition to a comprehensive suite of insurance plans, employee benefits, partnerships with financial adviser channels and bancassurance, Singlife offers investment solutions through its dollarDEX and Navigator platforms. The mobile-first Singlife Account – with a Singlife Debit Card – allows customers to save, spend, earn and be insured all in one app.
Singlife is the exclusive insurance provider for the Ministry of Defence, Ministry of Home Affairs and Public Officers Group Insurance Scheme. Singlife is also an official signatory of the United Nations Principles for Sustainable Insurance, affirming its commitment to finding a better way to sustainability.
First announced in September 2020 and valued at S$3.2 billion, the merger of Aviva Singapore and Singlife was the largest insurance deal in Singapore at the time and created one of the largest homegrown financial services companies in the Republic.
Key Responsibilities
- Develop and maintain scalable data pipelines that collect, process and store large volumes of complex data from various sources.
- Oversee the design, implementation and maintenance of data storage and processing systems, including databases, data warehouses and data lakes.
- Manage a team of data engineers, providing guidance and setting the right goals for the data engineering team.
- Collaborate with data scientists, analysts and other stakeholders to understand their data needs and provide solutions that meet them.
- Develop and implement data quality checks and data governance policies to ensure accuracy and consistency of data.
- Identify and resolve performance issues in data processing pipelines to ensure timely and efficient data delivery.
- Keep abreast of new technologies and trends in data engineering and recommend new tools and techniques to improve data processing and analysis.
- Develop project plans, track progress and manage budgets to ensure timely and cost-effective delivery of data engineering projects.
- Create and maintain documentation for data engineering processes, systems and tools to ensure knowledge transfer and continuity.
- Provide technical leadership and guidance to the data engineering team, ensuring best practices are followed and technical standards are maintained.
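To illustrate the data quality checks described above, here is a minimal sketch in Python. The field names, rules and thresholds are hypothetical examples, not Singlife's actual validation rules:

```python
# Hypothetical row-level data quality check of the kind a data
# engineering team might embed in a pipeline. Field names and
# numeric ranges below are illustrative assumptions only.

def validate_record(record, required_fields, numeric_ranges):
    """Return a list of quality issues found in a single record."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing required field: {field}")
    for field, (low, high) in numeric_ranges.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            issues.append(f"{field}={value} outside [{low}, {high}]")
    return issues

def run_quality_checks(records, required_fields, numeric_ranges):
    """Split a batch into clean records and rejects with reasons."""
    clean, rejects = [], []
    for record in records:
        issues = validate_record(record, required_fields, numeric_ranges)
        if issues:
            rejects.append((record, issues))
        else:
            clean.append(record)
    return clean, rejects
```

In a production pipeline the same pattern would typically run as a step in an orchestrator such as Airflow, routing rejected records to a quarantine location for review rather than silently dropping them.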
Requirements
- BS or MS degree in Computer Science or a related technical field
- Experience leading a technical team of data engineers.
- Strong data warehousing experience using relational (RDBMS) and non-relational (NoSQL) databases.
- Strong capability in AWS and distributed systems.
- Experience designing, building and maintaining data processing systems, large-scale datasets, data lakes and data warehouses at terabyte scale or above (ideally petabyte scale); Snowflake experience preferred.
- Data pipeline orchestration: Apache Airflow, Luigi or similar.
- Big Data processing framework: Apache Spark, Apache Beam, etc.
- Strong experience in ETL (AWS Glue), Amazon S3, Amazon Redshift, Amazon RDS, Amazon Kinesis, AWS Lambda, Apache Airflow and AWS Step Functions.
- Strong knowledge of scripting languages such as Python and UNIX shell, as well as Spark, is required.
- CI/CD pipelines: AWS CodeBuild / CodePipeline, GitLab CI, TeamCity or GitHub Actions.
- AI/ML experience (Jupyter and/or Python).
- Experience with the full software development lifecycle (SDLC) and Lean or Agile development methodologies.
- Understanding of RDBMS, data ingestion, data flows, data integration, etc.
- Technical expertise with data models, data mining and segmentation techniques.
- API design and development experience.
If you meet the criteria above, apply now. We look forward to your application.