
Senior Data Engineer

Secretlab Sg Pte. Ltd.


Secretlab is an international gaming chair brand seating over two million users worldwide, with our key markets in the United States, Europe and Singapore, where we are headquartered.


You will be a Senior Data Engineer / Data Engineer on our team, responsible for building out Secretlab’s data infrastructure and analytics. Demand for information has grown rapidly here at Secretlab, and the need for clean, serviceable stream and batch data has outstripped what out-of-the-box solutions can handle. We’re looking for Data Engineers who are excited about taking a start-up data culture to a new level.


Responsibilities:

To be successful, you will:

  • Set up AWS cloud data infrastructure for data engineering and data science use cases
  • Secure the data infrastructure against breaches and lapses (set up VPCs, SAML, etc. according to architectural best practices)
  • Design the data model and architecture for the data warehouse and other data systems
  • Develop star-schema, analytics, and ML layers with Airflow, dbt (Data Build Tool), etc.
  • Develop standard template packages for the rest of the team (e.g. logger templates, AWS templates, etc.; see the logging sketch after this list)
  • Maintain a reliable data pipeline by following best practices (avoid accruing technical debt through unit testing, logging, etc.)
  • Ship MVPs and balance UI/UX, function, and reliability, avoiding the pitfalls of over- and under-engineering
  • Deliver build-to-order pipelines against feature requests and user stories
  • Be comfortable with SaaS tools such as Fivetran, dbt, S3, and Snowflake
  • Communicate clearly and concisely about all of the above, and guide junior team members
  • Keep PRs bite-sized and easy to review, with 50% of PRs clearing within one review and 90% within two
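As a rough illustration of the logger-template idea above, here is a minimal sketch of a shared logging helper a data team might standardize on. The function name get_pipeline_logger and the format string are assumptions for the example, not an existing Secretlab package.

    import logging
    import sys

    def get_pipeline_logger(name: str, level: int = logging.INFO) -> logging.Logger:
        """Return a logger with a format shared across all pipeline jobs."""
        logger = logging.getLogger(name)
        if logger.handlers:  # already configured; avoid duplicate handlers on re-import
            return logger
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"
        ))
        logger.addHandler(handler)
        logger.setLevel(level)
        return logger

    # Usage in any pipeline job:
    log = get_pipeline_logger("shopify_ingest")
    log.info("starting micro-batch load")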

What your week will look like

  • Agile sprints with the Business Intelligence team to prune and prioritize the backlog (Ops)
  • Develop data models (star schemas, event-based data marts, etc.)
  • Conduct code reviews as part of the Data Team’s production process
  • Micro-batch/batch data in from source systems such as Shopify
  • Automate dbt pipelines with orchestrators such as Airflow or Luigi (a minimal sketch follows this list)
  • Establish connectors to downstream BI/DWH tools
  • Handle data processing errors and failures as they surface
  • Contribute process improvements and tool selections in the weekly retro (start, stop, continue)
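To make the orchestration item concrete, here is a minimal sketch of an Airflow DAG that micro-batches data in and then runs and tests dbt models, assuming Airflow 2.x with the standard BashOperator. The DAG id, schedule, paths, and the shopify_ingest.py script are all illustrative assumptions.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # Illustrative hourly micro-batch: ingest, build models, then test them.
    with DAG(
        dag_id="hourly_dbt_pipeline",        # hypothetical name
        start_date=datetime(2023, 1, 1),
        schedule_interval="@hourly",         # micro-batch cadence
        catchup=False,
    ) as dag:
        ingest = BashOperator(
            task_id="ingest_shopify",
            bash_command="python /opt/pipelines/shopify_ingest.py",  # hypothetical script
        )
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt_project && dbt run",
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt_project && dbt test",
        )
        ingest >> dbt_run >> dbt_test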

Requirements:

Technical

  • Proficient in SQL and Python
  • Familiarity with DevOps tools such as Git, Docker, and Terraform is a plus
  • Familiar with error logging, bad record handling, etc. (a sketch follows this list)
  • Able to build human-fault-tolerant pipelines: understanding how to scale up, addressing continuous integration, administering databases, maintaining data cleaning, and keeping the pipeline deterministic
  • Experience with the cloud (e.g., AWS, GCP)
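As an illustration of the bad-record handling mentioned above, here is a minimal sketch that quarantines unparseable rows to a reject file instead of failing the whole load. The CSV layout and the "amount" column are assumptions for the example.

    import csv
    import json

    def load_orders(src_path: str, reject_path: str) -> list:
        """Parse rows from src_path; quarantine bad records with the reason they failed."""
        good = []
        with open(src_path, newline="") as src, open(reject_path, "w") as rejects:
            # start=2: the header is line 1, so the first data row is line 2
            for lineno, row in enumerate(csv.DictReader(src), start=2):
                try:
                    row["amount"] = float(row["amount"])  # assumed column name
                    good.append(row)
                except (KeyError, ValueError) as exc:
                    # Never drop data silently; log enough to replay the record later.
                    rejects.write(json.dumps(
                        {"line": lineno, "row": row, "error": str(exc)}
                    ) + "\n")
        return good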

Personality

  • Real passion for data, new data technologies, and discovering new and interesting solutions to the company’s data needs
  • Upfront and Candid Personality – someone who is eager to contribute to the continuous improvement of both team and process; open to accepting and giving feedback (especially in retros)
  • Honest and Pragmatic – someone who assesses their capabilities honestly, without embellishment

Bonuses:

  • Prior experience with scaling up start-ups
  • Apache Spark
