- Good base salary + great benefits!
- Experience in leading a team; familiarity with building CI/CD data pipelines, Git/Docker, Spark/Hadoop, etc.
- Great environment to work in!
- Opportunity to work on different projects!
- Leading IT company + friendly & diverse culture!
Our client is one of the leading technology organisations.
Responsibilities:
- Translate data requirements from business users into technical specifications.
- Collaborate with stakeholders' IT teams on technology stack, infrastructure and security alignment.
- Build out data products as part of a data team.
- Architect and build ingestion pipelines to collect, clean, merge, and harmonize data from different source systems.
- Handle day-to-day operations of databases and ETL systems, e.g., database capacity planning, maintenance, monitoring, and performance tuning; diagnose issues and deploy measures to prevent recurrence; ensure maximum database uptime.
- Construct, test, and update useful, reusable data models based on the data needs of end users.
- Design and build secure mechanisms for end users and systems to access data in the data warehouse.
- Research, propose and develop new technologies and processes to improve the agency's data infrastructure.
- Collaborate with data stewards to establish and enforce data governance policies, best practices and procedures.
- Maintain a data catalogue to document data assets, metadata and lineage.
- Implement data quality checks and validation processes to ensure data accuracy and consistency.
- Implement and enforce data security best practices, including access control, encryption and data masking, to safeguard sensitive data.
Requirements:
- Bachelor's degree in Computer Science, Information Technology, Software Engineering or related disciplines preferred.
- Deep understanding of system design, data structure and algorithms, data modelling, data access, and data storage.
- Demonstrated ability to use cloud technologies such as AWS, Azure, and Google Cloud.
- Experience in architecting data and IT systems.
- Experience with orchestration frameworks such as Airflow and Azure Data Factory.
- Experience with distributed data technologies such as Spark and Hadoop.
- Proficiency in programming languages such as Python, Java, or Scala.
- Proficiency in writing SQL for databases.
- Familiarity with building and using CI/CD pipelines.
- Familiarity with DevOps tools such as Docker, Git and Terraform.
Other Information:
- Working Hours: Mon-Fri, 9am-6pm (flexible work hours/hybrid work arrangement & WFH).
- Location: Near Buona Vista MRT
To apply, please click on the LOGIN TO APPLY button and include the following details in your resume for faster processing:
- Reason for leaving
- Last drawn salary
- Expected salary
- Earliest availability date
We regret that only shortlisted candidates will be notified. By submitting any application or résumé to us, you will be deemed to have agreed and consented to us collecting, using, retaining and disclosing your personal information to prospective employers for their consideration.
Jiang Yiang Dong
EA License | 14C7092
EA Registration Number | R1105012