Our client, a leading financial services firm, is looking to hire a DataOps Engineer. The DataOps Engineer will be responsible for managing and monitoring the production environment of the Data platform in a multi-cloud setting. The role combines elements of data engineering and data operations, including creating and managing data pipelines.
Main Responsibilities:
- Manage and monitor the central Data and AI platform and related tools/systems for advanced business analytics and enterprise data governance.
- Oversee day-to-day operations of the Data & Analytics platform, ensuring smooth and efficient data pipelines.
- Ensure service and data availability per SLA; investigate, analyze, and resolve data/batch incidents within SLA; and implement proactive preventive measures.
- Provide technical advice to the team, participate in major incident calls, and lead the team through incident resolution.
- Create and monitor analytics dashboards for different business functions.
- Ensure the quality, integrity, and accuracy of datasets through tracked, secured, and auditable controls.
- Collaborate with stakeholders to address data-related technical issues and support their data needs.
- Follow and enforce best practices in software development and data operations.
- Perform L1/L2/L3 tasks for production monitoring and work closely with data engineers and the business analytics team to assist in data ingestion and data-related technical issues.
Requirements:
- Hands-on coding skills with Python, PySpark, and SQL, with an understanding of object-oriented analysis and design.
- Working knowledge of Microsoft Azure cloud services.
- Hands-on experience with Linux and shell scripting.
- Experience with cloud services and tools (AWS/Azure/GCP) and cloud data warehouse platforms.
- Experience with modern DevOps practices, including version control, TDD, CI/CD, etc.
If you or anyone in your network is keen to explore this opportunity, please share your CV with [email protected]