Responsibilities
- Design, implement, and manage scalable data infrastructure using AWS services such as Amazon Redshift, Amazon RDS, Amazon S3, and Amazon DynamoDB.
- Monitor and optimize the performance, availability, and security of data platforms and storage solutions.
- Develop and maintain ETL (Extract, Transform, Load) pipelines to move and process data across various systems using AWS services like AWS Glue, AWS Lambda, and AWS Data Pipeline.
- Automate data workflows and ensure they are reliable, scalable, and efficient.
- Set up and manage monitoring and alerting systems using Amazon CloudWatch and other tools to ensure the health and performance of data systems (an illustrative sketch appears after this list).
- Optimize query performance, data storage, and overall system efficiency.
- Implement security best practices to protect data, including encryption, access control, and compliance with industry standards and regulations.
- Work closely with stakeholders to understand their data needs and provide support for data-related issues.
- Collaborate with DevOps teams to integrate data pipelines with CI/CD (Continuous Integration/Continuous Deployment) processes.
- Document data infrastructure designs, data flows, and processes.
- Create reports and dashboards to provide insights into data operations.
- Diagnose and resolve data-related issues promptly to minimize downtime and impact on operations.
- Establish data governance frameworks to ensure proper data handling, ownership, and stewardship.
- Provide support for data recovery and disaster recovery procedures.
- Integrate AWS data solutions with third-party data processing, analytics, and visualization tools.
- Work on API development and integration to enhance data accessibility.
- Participate in architectural discussions and design sessions to align data solutions with overall cloud strategy.
- Develop and implement backup strategies to protect data integrity and ensure recovery options are available.
- Test disaster recovery plans regularly to ensure preparedness.
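For illustration, the monitoring and alerting responsibility above might look like the following in practice. This is a minimal sketch only, not part of the role description: the region, alarm name, table name, and SNS topic ARN are assumptions chosen for the example.

```python
# Illustrative sketch: automating a CloudWatch alarm with boto3.
# All resource names below (table, alarm, SNS topic, account ID) are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-southeast-1")

# Alarm when a hypothetical DynamoDB table starts throttling reads.
cloudwatch.put_metric_alarm(
    AlarmName="orders-table-read-throttles",                 # hypothetical alarm name
    Namespace="AWS/DynamoDB",
    MetricName="ReadThrottleEvents",
    Dimensions=[{"Name": "TableName", "Value": "orders"}],   # hypothetical table
    Statistic="Sum",
    Period=300,                      # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    # hypothetical SNS topic that pages the on-call data engineer
    AlarmActions=["arn:aws:sns:ap-southeast-1:123456789012:data-ops-alerts"],
)
```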
Profile
- Minimum of a Bachelor’s degree or equivalent.
- Possess at least 4 years of experience as a DevOps Engineer or in an equivalent software-engineering role, implementing and deploying projects of similar nature and scope.
- Hands-on experience in building cloud infrastructure and container platforms, and/or managing DevOps tools, developing pipelines and workflows, and code deployment tools (e.g. Jenkins, GitLab, Jira).
- Possess at least 3 years of experience as an MLOps/DataOps Engineer or in an equivalent data engineering role, with hands-on experience in deploying machine learning models and data pipelines, and/or managing MLOps/DataOps tools, developing pipelines and workflows, and code deployment tools.
- Possess at least 3 years of working experience with the Linux operating system, Python, JavaScript, AWS ECS and Fargate (or a relevant container platform), and AWS services.
- Demonstrate strong knowledge of cloud infrastructure, the Singapore Government Technology Stack, Software-Defined Networking, Infrastructure-as-Code, scripting, and monitoring tools.
- Possess good technical knowledge of implementing, troubleshooting, and performance-tuning operating systems, middleware, and system services.
- A background in application development and familiarity with the software development lifecycle are preferred.
Certifications (Preferred)
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified Data Analytics – Specialty
- AWS Certified DevOps Engineer – Professional