Data Engineer (ETL, HADOOP, AWS, REDSHIFT)

Exasoft Pte. Ltd.

Responsibilities


• Join the data migration project, including the transition of legacy systems to more
modern platforms.
• Design, build, and maintain data pipelines to enable seamless data extraction,
transformation, and loading (ETL) while ensuring high data quality throughout the
process.
• Identify and resolve data quality issues by implementing governance measures and
proposing optimization techniques to improve data accuracy and integrity.
• Contribute to the design and enhancement of data governance frameworks, ensuring proper
documentation, lineage tracking, and compliance with best practices.
• Develop and maintain scalable data systems, including data warehouses and data lakes,
leveraging big data technologies like S3, Spark, Redshift, and Snowflake.
• Design and develop both backend and frontend components for internal and business-facing tools.
• Collaborate with cross-functional teams, including business stakeholders, to define and
implement data governance policies and support their data needs.
• Explore and propose new technologies and tools to improve data operations and
governance, especially within cloud environments.
• Contribute to the design and implementation of data architecture models that support the
organization’s long-term data strategy.
• Monitor data systems and pipelines, ensuring smooth operation, and troubleshooting
issues to maintain high data quality and system reliability.

Requirements


• At least 5 years of experience in data engineering, with a focus on ETL processes, data
pipeline development, and system optimization.
• Strong understanding of data governance principles, including data quality management,
lineage tracking, and metadata management.
• Hands-on experience with cloud platforms and databases (AWS, Azure, Google Cloud) and
big data technologies (Hadoop, Spark, Redshift, Snowflake).
• Proficiency in Python and SQL for data processing and database design.
• Familiarity with data warehousing concepts and experience in designing and maintaining
data lakes and distributed databases.
• Experience working with Agile methodologies within cross-functional teams, with an
emphasis on data quality and governance.
• Understanding of containerization tools (e.g., Docker, Kubernetes) and microservices
architecture for managing pipelines.
• Strong numerical and analytical skills, with the ability to analyze complex data sources and
identify potential data governance issues.
• Proficiency in the JavaScript ecosystem (e.g., ReactJS, NodeJS, VueJS) is a plus.
