The ideal candidate will have a strong background in DevOps, MLOps, and DataOps, with expertise in cloud infrastructure, container platforms, and machine learning model deployment.
Responsibilities:
- Design, implement, and maintain cloud infrastructure and container platforms
- Develop and manage DevOps, MLOps, and DataOps pipelines and workflows
- Deploy and manage machine learning models and data pipelines
- Implement and maintain code deployment tools and processes
- Troubleshoot and optimize operating systems, middleware, and system services
- Collaborate with software development teams to ensure smooth integration of ML models and data pipelines
- Stay up-to-date with the latest trends and best practices in DevOps, MLOps, and cloud technologies
Requirements:
- Bachelor's degree or higher in Computer Science, Information Technology, or equivalent field
- Minimum 3 years of experience as a DevOps Engineer or in an equivalent software engineering role
- Minimum 2 years of experience as an MLOps/DataOps Engineer or in an equivalent data engineering role
- Proven experience with cloud infrastructure, container platforms, and DevOps tools (e.g., Jenkins, GitLab, Jira)
- Strong proficiency in Linux operating systems, Python, and JavaScript
- Hands-on experience with AWS SageMaker
- Relevant IT project implementation and deployment experience
- Holds the required certifications and has passed the Technical Assessment
Certifications (Preferred):
- AWS Certified Solutions Architect or Developer
- Azure Solutions Architect Expert
- Google Professional Cloud Architect
- Certified Kubernetes Administrator (CKA)
- MLOps- or DataOps-related certifications (if available)