DFI Company Brief
DFI Retail Group (the ‘Group’) is a leading pan-Asian retailer. As at 31st December 2021, the Group and its associates and joint ventures operated over 10,200 outlets and employed some 230,000 people. The Group had total annual sales in 2021 exceeding US$27 billion.
The Group provides quality and value to Asian consumers by offering leading brands, a compelling retail experience and great service; all delivered through a strong store network supported by efficient supply chains.
The Group (including associates and joint ventures) operates under several well-known brands across food, health and beauty, home furnishings, restaurants and other retailing.
The Group’s parent company, Dairy Farm International Holdings Limited, is incorporated in Bermuda and has a primary listing on the London Stock Exchange, with secondary listings in Bermuda and Singapore. The Group's businesses are managed from Hong Kong by Dairy Farm Management Services Limited through its regional offices.
DFI Retail Group is a member of the Jardine Matheson Group.
Responsibilities:
- Implement Data SRE best practices to enhance the reliability, availability, and performance of our data infrastructure.
- Drive data operational excellence by introducing data system observability which includes monitoring, logging, and alerting.
- Collaborate with cross-functional teams, including data engineering, analytics, and other IT teams to understand data system requirements and deliver effective solutions.
- Monitor and troubleshoot data-related incidents, working closely with cross-functional teams to ensure quick resolution.
- Be a champion for the Infrastructure as Code philosophy and drive its adoption.
- Bring a DevOps culture and mindset to the entire data engineering team.
- Stay informed about emerging technologies and industry trends in data management and SRE.
Requirements:
- Bachelor’s degree in Technology, Engineering, or a related field.
- Preferably 5+ years of DevOps / SRE experience.
- Experience in maintaining highly scalable systems in cloud environments.
- Good knowledge of infrastructure and networking.
- Experience in building observability platforms using tools such as Prometheus, Grafana, and the ELK stack.
- Experience in building CI/CD pipelines or platforms using tools such as Jenkins, GitHub Actions, Bitbucket Pipelines, etc.
- Experience in driving Infrastructure as Code projects using tools such as Terraform.
- Experience in incident response for tech-related incidents.
- Good to have knowledge of end-to-end data engineering workflows and data infrastructure (data lakes, data pipelines, data visualisation tools, etc.). Experience working as DevOps for a data team is a huge plus.
- Able to work independently in a fast-paced environment with minimal supervision.
- Good communication skills including strong written and spoken English.