Job Description:
Work as part of a team to build and maintain data systems and pipelines.
Perform setup, installation, configuration, troubleshooting and/or upgrade for COTS products.
Develop or implement ways to improve data warehouses, data lakes or equivalent platforms.
Contribute to the creation of documentation, e.g. design documents, troubleshooting guides, etc.
Education / Experience:
Diploma/Degree in Computer Engineering/Computer Science/Information Technology or related
technical discipline
Preferably 1-4 years of working experience in related fields.
Fresh graduates are encouraged to apply.
Skill Sets:
Knowledge and/or experience in data management or data engineering
Experience with Linux commands and shell scripting
Knowledge and/or experience in relational (SQL) or NoSQL databases (e.g. document, graph)
Knowledge and/or experience in one or more of the following will be an advantage:
o Data / Search / Automation platforms such as Hadoop, Elasticsearch, Ansible respectively
o Data integration tools such as Talend, DataStage, Denodo
o Programming languages such as Python, Spark
o Microsoft Azure Cloud services such as Azure Data Factory, Azure Synapse Analytics
o Analytics platforms such as Databricks, Dataiku, DataRobot
Good problem-solving skills
Able to work independently and as part of a team
Only shortlisted candidates will be notified.
Please email a copy of your detailed resume to [email protected] for immediate processing.
(EA Reg No: 20C0312)