- Possess a degree in Computer Science, Information Technology, or a related field.
- At least 3 years of experience in a role focusing on data pipelines.
- Experience building on data platforms (e.g. Snowflake, Redshift, Databricks).
- Proficient in SQL and Python.
- Experience in Continuous Integration and Continuous Deployment (CI/CD).
- Experience in Software Development Life Cycle (SDLC) methodology.
- Experience in data warehousing concepts.
- Strong problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Able to design and implement solutions and perform code reviews.
- An agile, fast learner who adapts readily to change.
- Experience building with data transformation frameworks (e.g. dbt, Dataform).
- Experience with cloud environments (e.g. AWS, GCP, Azure).
- Experience with big data technologies (e.g. Spark).
- Experience with data platform migration.
- Experience with ETL tools (e.g. Informatica) and data virtualization platforms (e.g. Denodo) is preferred.
- Provide BAU support for production job monitoring, issue resolution, and bug fixes.
- Work closely with data stewards, data analysts, and business end users to implement and support data solutions.
- Responsible for requirements gathering, design and development, system integration testing (SIT), UAT support, and CI/CD deployment for enhancements and new ingestion pipelines.
- Ensure compliance with IT security standards, policies, and procedures.