Job Description:
- Be part of a team to build and maintain data systems or pipelines.
- Perform setup, installation, configuration, troubleshooting and/or upgrades for COTS (commercial off-the-shelf) products.
- Develop or implement ways to improve data warehouses, data lakes or equivalent platforms.
- Contribute to the creation of documentation, e.g. design documents, troubleshooting guides.
Job Requirements:
- Degree in Computer Engineering/Computer Science/Information Technology or related technical discipline.
- Preferably 1 - 2 years of working experience in related fields.
- Fresh graduates are welcome to apply.
- Knowledge and/or experience in data management or data engineering.
- Experience with Linux commands and shell scripting.
- Knowledge and/or experience in relational (SQL) or NoSQL databases (e.g. document, graph).
- Knowledge and/or experience in one or more of the following will be an advantage:
  o Data / search / automation platforms such as Hadoop, Elasticsearch, Ansible respectively.
  o Data integration tools such as Talend, DataStage, Denodo.
  o Programming languages and frameworks such as Python, Spark.
  o Microsoft Azure Cloud services such as Azure Data Factory, Azure Synapse Analytics.
  o Analytics platforms such as Databricks, Dataiku, DataRobot.
- Good problem-solving skills.
- Able to work independently and as part of a team.
Only shortlisted candidates will be notified.
Please email a copy of your detailed resume to [email protected] for immediate processing.
(EA Reg No: 20C0312)