We are looking for a Data Engineer to support our project in Singapore.
The Data Engineer will support the development and maintenance of the global AI sandbox, the development of cutting-edge AI solutions, and R&D, and will provide technical guidance and support in the data domain to the teams under incubation that develop AI products.
The role requires a strong technical background in data engineering, AI, and modern cloud infrastructure, excellent technical skills, and the ability to collaborate with internal and external stakeholders across technology, AI, and business to deliver impactful AI products. The Data Engineer will work on the delivery of high-quality data infrastructure, continuous improvement and experimentation, and the development of the company's data expertise.
Key Responsibilities:
In this role, you will be an active member of the AI Lab, acting as a Data Engineer to support the implementation of solutions built in the AI Lab:
- Build data pipelines to bring in a wide variety of data from multiple sources within the organization as well as from relevant third-party sources.
- Collaborate with cross-functional teams to source data and make it available for downstream consumption.
- Work with the team to provide an effective solution design to meet business needs.
- Ensure regular communication with key stakeholders to understand any key concerns about how the initiative is being delivered, as well as any risks or issues that have not yet been identified or are not being progressed.
- Ensure timelines (milestones, decisions, and delivery) are managed and the value of the initiative is achieved without compromising quality and within budget.
- Ensure an appropriate and coordinated communications plan, both internal and external, is in place for initiative execution and delivery.
Qualifications:
- Strong technical background in data, AI, and modern cloud infrastructure.
- Excellent programming skills in languages such as Python and SQL.
- Experience with cloud providers and platforms such as Azure Databricks, Google Cloud Platform, or AWS.
- Experience in building data pipelines using batch processing with Apache Spark (Spark SQL, Dataset/DataFrame API) or Hive Query Language (HQL).
- Knowledge of big data ETL processing tools.
- Experience with Hive and Hadoop file formats (Avro, Parquet, ORC).
- Basic knowledge of scripting (shell/Bash).
- Experience working with multiple data sources, including relational databases (SQL Server, Oracle, DB2, Netezza), NoSQL/document databases, and flat files.
- Basic understanding of CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo, and Azure DevOps.
- Basic understanding of DevOps practices using Git version control.
- Ability to debug, fine-tune, and optimize large-scale data processing jobs.