
Data Engineer (Azure)

Kerry Interim Pte. Ltd.


Responsibilities:

  1. Development and maintenance of data pipelines to extract, transform, and load data from diverse sources into a centralized storage system like a data warehouse or data lake.
  2. Integration of data from multiple origins and systems, encompassing databases, APIs, log files, streaming platforms, and external data providers.
  3. Creation of data transformation routines for cleansing, normalization, and aggregation, applying data processing techniques to handle complex data structures, resolve missing or inconsistent data, and prepare data for analysis, reporting, or machine learning tasks.
  4. Contribution to the development of common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines.
  5. Implementation of data governance aligned with company standards.
  6. Collaboration with Data Analytics and Product leaders to establish best practices and standards for the development and operationalization of analytic pipelines.
  7. Collaboration with Infrastructure leaders on architectural strategies to enhance the data and analytics platform, including exploration of new tools and techniques leveraging the cloud environment (Azure, Databricks, among others).
  8. Monitoring and providing support for data pipelines and systems to promptly detect and resolve issues. Development of monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability.
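The extract-transform-load flow described in the responsibilities above can be sketched in miniature. This is an illustrative sketch only, not the employer's actual stack: the CSV source, the `people` table, and the SQLite "warehouse" are all invented stand-ins.

```python
import csv
import io
import sqlite3

# Hypothetical raw source: a CSV feed with messy and missing values.
RAW = "name,age\n Alice ,30\nbob,\n"

def extract(text):
    # Extract: parse the raw source into row dictionaries.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: cleanse and normalize names, handle missing ages.
    out = []
    for r in rows:
        name = r["name"].strip().title()
        age = int(r["age"]) if r["age"] else None
        out.append((name, age))
    return out

def load(rows, conn):
    # Load: write the cleaned rows into a table standing in for a warehouse.
    conn.execute("CREATE TABLE IF NOT EXISTS people (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO people VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT * FROM people").fetchall())
```

In a production pipeline each stage would be a separately monitored, orchestrated task rather than an in-process function call, but the extract/transform/load separation is the same.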

Requirements:

  1. Demonstrated experience in a Data Engineering role (minimum 3 years), with a proven track record of delivering scalable data pipelines.
  2. Extensive expertise in designing data solutions, including data modeling.
  3. Hands-on experience in developing data processing jobs (PySpark/SQL) demonstrating a solid grasp of software engineering principles.
  4. Proficiency in orchestrating data pipelines using technologies such as Azure Data Factory (ADF) or Airflow.
  5. Familiarity with both real-time and batch data processing.
  6. Proficiency in building data pipelines on Azure, with additional experience in AWS data pipelines being advantageous.
  7. Proficiency in SQL (any flavor), with experience in utilizing Window functions and other advanced features.
  8. Understanding of DevOps tools, Git workflow, and the ability to build CI/CD pipelines.
  9. Experience with streaming data processing technologies such as Apache Kafka, Apache Flink, or AWS Kinesis, including the ability to design and implement real-time data processing pipelines, is advantageous.
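As an illustration of the SQL window functions mentioned in requirement 7, the hedged sketch below computes a per-user running total with `SUM(...) OVER (PARTITION BY ...)`. It uses Python's built-in `sqlite3` for portability; the `purchases` table and its columns are invented for the example.

```python
import sqlite3

# Hypothetical purchases table: compute each user's running total
# of purchase amounts with a SQL window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE purchases (user_id TEXT, ts INTEGER, amount REAL);
    INSERT INTO purchases VALUES
        ('alice', 1, 10.0), ('alice', 2, 5.0),
        ('bob',   1, 7.5),  ('bob',   3, 2.5);
""")
rows = conn.execute("""
    SELECT user_id, ts, amount,
           SUM(amount) OVER (PARTITION BY user_id ORDER BY ts) AS running_total
    FROM purchases
    ORDER BY user_id, ts
""").fetchall()
for row in rows:
    print(row)
```

The same windowing pattern (partition, order, accumulate) carries over directly to the PySpark `Window` API named in requirement 3.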

To Apply:
Please submit your resume directly to [email protected], quoting the job title. Due to the high volume of applications, only shortlisted candidates will be notified.

Reg: R22104910
License: 22C0942

This job post has expired.
