Job Description:
- Work as part of a team to build and maintain data systems and pipelines.
- Perform setup, installation, configuration, troubleshooting and/or upgrade for COTS products.
- Develop or implement ways to improve data warehouses, data lakes or equivalent platforms.
- Be involved in creating documentation, e.g. design documents, troubleshooting guides, etc.
Skill Sets:
- Knowledge and/or experience in data management or data engineering
- Experience with Linux commands and shell script
- Knowledge and/or experience with relational (SQL) or NoSQL databases (e.g. document, graph)
- Knowledge and/or experience in one or more of the following will be an advantage:
- Data, search, or automation platforms such as Hadoop, Elasticsearch, or Ansible
- Data integration tools such as Talend, DataStage, Denodo
- Programming languages and frameworks such as Python and Spark
- Microsoft Azure Cloud services such as Azure Data Factory, Azure Synapse Analytics
- Analytics platforms such as Databricks, Dataiku, DataRobot
- Good problem-solving skills
- Able to work independently and as part of a team
Education / Experience:
- Diploma/Degree in Computer Engineering/Computer Science/Information Technology or related technical discipline
- Preferably 1 to 3 years of working experience in related fields
- Fresh graduates are encouraged to apply
Only shortlisted candidates will be notified.
Please email a copy of your detailed resume to [email protected] for immediate processing.
(EA Reg No: 20C0312)