
Cloud Data Architect

Thales DIS (Singapore) Pte. Ltd.

As a Cloud Data Architect in AIR Lab, the members of your Data team see you not only as a pillar of strength but also as a source of motivation and inspiration. You should be someone who enjoys designing and discussing processing patterns such as data quality control, streaming SQL, data sources/sinks, data synchronization, streaming backfill and stream-to-stream joins. You should care about the quality of the technical implementation as much as you care about the quality of delivery. You should enjoy working in a team of diverse people with multiple ethnic and cultural backgrounds. You should enjoy diving into the technical details of a problem and be able to communicate the solution back to the team so that the members can learn from it. And you should love learning new technologies, find innovative ways to apply newfound knowledge, have the courage to encourage fellow team members to do the same, and enjoy participating in all aspects of engineering activities in the AIR Lab.
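To illustrate one of the processing patterns mentioned above, here is a minimal sketch of a windowed stream-to-stream join using the Apache Beam Python SDK; the keys, values and the 60-second window are hypothetical, and a real pipeline would read from streaming sources rather than in-memory test data.

    import apache_beam as beam
    from apache_beam.transforms.window import FixedWindows

    # Key both streams identically, window them into the same fixed windows,
    # then co-group per key within each window -- a basic stream-to-stream join.
    with beam.Pipeline() as p:  # DirectRunner by default
        clicks = (
            p
            | "MakeClicks" >> beam.Create([("alice", "click:home"), ("bob", "click:cart")])
            | "WindowClicks" >> beam.WindowInto(FixedWindows(60))
        )
        orders = (
            p
            | "MakeOrders" >> beam.Create([("alice", "order:42")])
            | "WindowOrders" >> beam.WindowInto(FixedWindows(60))
        )
        (
            {"clicks": clicks, "orders": orders}
            | "Join" >> beam.CoGroupByKey()  # emits (key, {"clicks": [...], "orders": [...]})
            | "Print" >> beam.Map(print)
        )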


Responsibilities:

  • Plan, maintain and improve the DataLake cybersecurity posture with regard to data governance and cybersecurity standards by working with other stakeholders (e.g., Data Assessment Office, Cybersecurity Office).
  • Plan, maintain and improve the DataLake service levels for reliable data flow and healthy infrastructure (i.e., compute and storage).
  • Plan, maintain and improve the total cost of ownership of the DataLake; this includes raising efficiencies around FinOps and CloudOps.
  • Maintain and improve the architecture for transforming data between the DataLake and a distributed search and analytics engine (e.g., Elasticsearch).
  • Lead the evolution of the DataLake by, among other activities, exploring new methods, techniques and algorithms (e.g., data meshes, AI/MLOps infrastructure).
  • Plan and maintain the data model and data catalogue (e.g., event data, batched data, persisted, ephemeral).
  • Mentor and coach the data team.
  • Implement features by defining tests first, then developing the feature and its associated automated tests; where appropriate, implement security and load tests (see the short sketch after this list).
  • Write and review the necessary technical and functional documentation in documentation repositories (e.g., backstage.io, JIRA, READMEs).
  • Work in an agile, cross-functional, multinational team, actively engaging to support the success of the team.
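As a minimal illustration of the test-first workflow described in the list above (the module and function names are hypothetical), the test is written before the feature:

    # test_dedupe.py -- written first; fails until the feature exists.
    from dedupe import drop_duplicate_events

    def test_keeps_first_occurrence_of_each_id():
        events = [{"id": 1}, {"id": 1}, {"id": 2}]
        assert drop_duplicate_events(events) == [{"id": 1}, {"id": 2}]

    # dedupe.py -- the feature, implemented to make the test pass.
    def drop_duplicate_events(events):
        """Keep the first event for each 'id'; drop later duplicates."""
        seen, unique = set(), []
        for event in events:
            if event["id"] not in seen:
                seen.add(event["id"])
                unique.append(event)
        return unique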

Requirements:


Education

  • Bachelor's degree in Computer Science or Information Technology
  • Master’s degree in Computer Science or Data Science, if applicable

Essential Skills/Experience

  • Proficiency in designing and implementing ETL data pipelines (with structured or unstructured data) using frameworks like Apache Dataflow/Apache Beam and Apache Flink; proficient in deploying ETL pipelines into a Kubernetes cluster in the Azure cloud, either as virtual machines or as containerized workloads.
  • Proficiency in designing and implementing data lifecycle management using scalable object-storage systems like MinIO (e.g., tiering, object expiration, multi-nodal approach); a minimal sketch follows this list.
  • Demonstrated experience working with Continuous Integration and/or Continuous Delivery models; you are expected to be familiar with Linux (e.g., shell commands).
  • Proficiency in distributed source code management tools like GitLab, GitHub.
  • With respect to ETL pipelines, you are expected to demonstrate proficiency in the following:
    • pipeline configuration using GitLab
    • environment management using GitLab; it's a bonus if you have demonstrated experience in deployment management (e.g., canary, blue/green rollouts) using GitLab
  • Familiarity with cloud deployment strategies to public clouds (e.g., Azure Cloud, AWS, GCP) and Kubernetes, using virtualized and containerized workloads (e.g., Kaniko, Docker, virtual machines).
  • Good communication skills in English.
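A minimal sketch of the object-lifecycle item above, using the MinIO Python SDK; the endpoint, credentials, bucket name, prefix, tier name and retention periods are all hypothetical:

    from minio import Minio
    from minio.commonconfig import ENABLED, Filter
    from minio.lifecycleconfig import Expiration, LifecycleConfig, Rule, Transition

    # Hypothetical endpoint and credentials.
    client = Minio("minio.example.com", access_key="...", secret_key="...")

    config = LifecycleConfig(
        [
            # Tier raw objects to a colder storage tier after 30 days...
            Rule(
                ENABLED,
                rule_filter=Filter(prefix="raw/"),
                rule_id="tier-raw",
                transition=Transition(days=30, storage_class="COLD"),
            ),
            # ...and expire them entirely after one year.
            Rule(
                ENABLED,
                rule_filter=Filter(prefix="raw/"),
                rule_id="expire-raw",
                expiration=Expiration(days=365),
            ),
        ]
    )
    client.set_bucket_lifecycle("datalake", config)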

Desirable Skills/Experience

  • Working knowledge of designing applications with a "shift-left" cybersecurity approach.
  • Working knowledge of other languages (e.g., Python, Scala, Go, TypeScript, C, C++, Java).
  • Experience implementing event-driven processing pipelines using frameworks like Apache Kafka and Apache Samza (a minimal sketch follows this list).
  • Familiarity with cloud deployment models (e.g., public, private, community and hybrid).
  • Familiarity with the main cloud service models: Software as a Service, Platform as a Service and Infrastructure as a Service.
  • Familiarity with designing and/or implementing AI/MLOps pipelines in a public cloud (e.g., Azure, AWS, GCP).
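A minimal consume-transform-produce sketch of an event-driven pipeline, using the kafka-python client; the broker address, topic names and the uppercasing "transformation" are hypothetical placeholders:

    from kafka import KafkaConsumer, KafkaProducer

    # Hypothetical broker and topics.
    consumer = KafkaConsumer("raw-events", bootstrap_servers="localhost:9092")
    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    for message in consumer:              # blocks, yielding records as they arrive
        enriched = message.value.upper()  # placeholder transformation (raw bytes)
        producer.send("enriched-events", enriched)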

Essential / Desirable Traits

  • Possess learning agility, flexibility and proactivity
  • Comfortable with agile teamwork and user engagement