
Data Engineer

Primestaff Management Services Pte Ltd

Our client is one of the leading global banks. They are seeking a skilled and motivated Data Engineer to join their team in Singapore. The ideal candidate will have extensive experience in building and maintaining scalable data pipelines and data architecture, and in deploying applications to cloud platforms. You will work with relational and NoSQL databases, handle structured and unstructured data, and support distributed data processing. If you're passionate about building efficient batch and streaming pipelines, this is the perfect opportunity for you.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and architectures.
  • Handle both structured and unstructured data sources.
  • Work extensively with relational databases (SQL) and NoSQL databases.
  • Deploy applications to cloud platforms such as AWS and Azure.
  • Collaborate with teams to build efficient batch and streaming data engineering pipelines.
  • Leverage distributed data processing platforms like Apache Spark to handle large datasets.
  • Continuously monitor and optimize data pipelines for performance and scalability.
  • Ensure data quality, security, and availability across systems.

Required Qualifications:

  • Degree in Computer Science or other relevant disciplines.
  • Strong proficiency in SQL and experience with relational and NoSQL databases.
  • Experience deploying data applications to cloud platforms (AWS, Azure).
  • Proficiency in handling structured and unstructured data from multiple sources.
  • Demonstrated expertise in building data pipelines and data architecture.
  • Hands-on experience with distributed data processing platforms like Apache Spark.
  • Familiarity with batch and streaming data pipeline frameworks.
  • Excellent problem-solving skills and attention to detail.
  • Experience with data pipeline orchestration tools (e.g., Apache Airflow, Prefect).
  • Familiarity with containerization (Docker, Kubernetes).
  • Knowledge of ETL frameworks and processes.
  • Exposure to machine learning workflows and model deployment is a plus.
