Work with teams from concept to operations, providing technical subject-matter expertise for the successful implementation of enterprise data solutions using modern data technologies. This individual will be responsible for the planning, execution, and delivery of data initiatives, as well as for expanding and optimising the data pipelines and architecture. This is a hands-on development role centred on Databricks and Microsoft Azure data engineering, with application development in Java and Python.
Responsibilities:
- Work collaboratively with relevant teams to define functional and technical requirements
- Document technical specifications, processes, and workflows for data pipelines and related systems
- Design, develop, and maintain scalable data pipelines to ingest, process, and store data from various sources into the operational data platform
- Design and develop intuitive, highly automated, self-service data platform functions for business users
- Optimise data processing and storage infrastructure for mission-critical, high-volume, near-real-time and batch data pipelines
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and availability (see the illustrative sketch after this list)
- Participate in code reviews, testing, and deployment processes to maintain high standards of data engineering practices
- Troubleshoot and resolve data pipeline issues in a timely manner to minimise impact on business operations
- Contribute to the overall data architecture and strategy for the operational data platform
- Manage stakeholder expectations and ensure clear communication
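By way of illustration, the sketch below shows what a minimal ingest-process-store pipeline with an inline data quality gate might look like in PySpark, the typical Databricks idiom for this kind of work. It is a sketch only, not the team's actual codebase: the landing path, table names, schema fields, and 5% reject threshold are all hypothetical placeholders.

```python
# Minimal sketch of an ingest -> process -> quality-gate -> store pipeline,
# assuming a Databricks environment with PySpark and Delta Lake available.
# All paths, table names, and schema fields are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest-sketch").getOrCreate()

# Ingest: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/orders/")  # placeholder path

# Process: normalise types and stamp each row with its load time.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("_loaded_at", F.current_timestamp())
)

# Data quality check: quarantine rows with missing keys or invalid amounts
# rather than failing the whole batch.
is_bad = (
    F.col("order_id").isNull()
    | F.col("amount").isNull()
    | (F.col("amount") < 0)
)
bad = orders.filter(is_bad)
good = orders.filter(~is_bad)

# Alerting hook: a real pipeline would feed a monitoring/alerting system;
# here we simply fail loudly if the reject rate exceeds a threshold.
total, rejected = orders.count(), bad.count()
if total > 0 and rejected / total > 0.05:  # the 5% threshold is an assumption
    raise ValueError(f"Data quality gate failed: {rejected}/{total} rows rejected")

# Store: append clean rows to a Delta table and keep rejects for triage.
good.write.format("delta").mode("append").saveAsTable("ops.orders")         # placeholder
bad.write.format("delta").mode("append").saveAsTable("ops.orders_rejects")  # placeholder
```

Quarantining bad rows instead of aborting the batch is one common design choice here: it keeps the pipeline available for business operations while preserving the rejected records for troubleshooting, in line with the availability and issue-resolution responsibilities above.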