
Senior Data Platform Engineer (Python, Hadoop, Spark, Azure, Kafka) (Contract)

Ntt Singapore Pte. Ltd.

Responsibilities:

• Design & Architecture: You will help design and implement an optimal data pipeline architecture for the analytics system. You will identify, design, and implement improvements that automate manual processes and optimize the speed and efficiency of data delivery, architecting for high availability, scaling, and reliability. You will also architect and design infrastructure to facilitate the extraction, transformation, export, and querying of data from a variety of data sources, internal and external, both small and large.

• Massive data: You will source, examine, analyze, and engineer data pipelines for gigabytes to terabytes of structured and unstructured data on our platform to create value for customers. You will also work with enterprise data.

• Pushing the limits: This role sits on the cutting edge of our Data / Machine Learning platform. As we push to solve more of our customers’ challenges, you will prototype new features, tools, and ideas, innovating at a fast pace to maintain our competitive edge.

• Distributed processing engine: You will work on the data platform, which processes both unbounded and bounded data sources and performs in-memory computation and transformation (a short sketch follows this list).

• Production deployment: You will be responsible for integrating and deploying the data ingestion and machine learning pipelines into production, where your ideas can come to life.

• Work with stakeholders, including the Data Science, Business Systems Analyst, and Architecture teams, to assist with data platform technical and organizational issues and support the company’s data and analytics needs.

• Educate, train, and mentor members of the Data Engineering and Analytics teams in the design, implementation, and usage of modern data systems.
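
For illustration, below is a minimal PySpark sketch of the bounded/unbounded processing described in the distributed processing engine item: the same in-memory transformation is applied to a batch Parquet source and to a streaming Kafka source. The file paths, broker address, topic, schema, and column names are hypothetical, and the streaming read assumes the Spark Kafka connector is available; treat it as a sketch of the idea, not part of the role’s actual codebase.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("bounded-unbounded-sketch").getOrCreate()

# Hypothetical schema for the event payload.
schema = StructType([
    StructField("event_time", StringType()),
    StructField("amount", DoubleType()),
])

def enrich(df):
    # Shared in-memory transformation applied to both sources.
    return (df.withColumn("event_ts", F.to_timestamp("event_time"))
              .withColumn("is_large", F.col("amount") > 1000))

# Bounded source: a finite set of Parquet files processed as one batch.
batch = enrich(spark.read.parquet("/data/transactions/2024/"))
batch.write.mode("overwrite").parquet("/data/enriched/transactions/")

# Unbounded source: a Kafka topic processed as a continuous stream.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "transactions")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (enrich(stream).writeStream
         .format("parquet")
         .option("path", "/data/enriched/stream/")
         .option("checkpointLocation", "/chk/transactions/")
         .start())

The point of the sketch is that one enrich() transformation serves both the bounded (batch) and the unbounded (streaming) path.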

Skills & Qualifications for Data Platform Engineer

Key Requirements:

• IT / CS fundamentals: You hold at least a Bachelor’s or Master’s degree in Information Technology, Computer Science, or a related field, and you have a strong ethos of continuous learning.

• Commercial software engineering: You have 11+ years of professional software development experience across multiple programming languages, including modern virtual-machine languages such as Java as well as common scripting and glue languages such as Python, along with version control (git) and strong analytical and debugging skills. You have experience performing root-cause analysis on bugs and performance problems in distributed systems, including network- and source-level debugging.

• Big data: You have extensive experience with data analytics and working knowledge of big data infrastructure such as Google Cloud, BigQuery, Dataflow, the Hadoop ecosystem, HDFS, Apache Storm, and Apache Spark. You have routinely built data pipelines over gigabytes to terabytes of data and understand the challenges of manipulating such large datasets.

• Cloud exposure: You have strong experience implementing systems and applications on distributed and cloud infrastructure. GCP is preferred, but AWS or Azure experience is also acceptable.

• Data modeling: You have a flair for data, schemas, data models, PL/SQL, and star and snowflake schemas; you know how to design data models for efficient analytical querying, understand the importance of TDD, and develop data validation techniques (a short sketch follows this list). You have working knowledge of distributed relational and tabular data stores, message queues, stream processing facilities, and other scalable big-data platform technologies.

• Real-time systems: You understand the evolution of databases toward in-memory, NoSQL, and indexing technologies, and you have experience with real-time and stream processing systems such as Google Pub/Sub, GCP technologies, Kafka, Storm, and Spark Streaming.

• Strong design skills: You have a proven track record of success on large, highly complex projects, preferably in enterprise applications and integration, with advanced, hands-on knowledge of the design, implementation, and optimization of big data architectures, pipelines, and datasets.
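
As a companion to the data modeling requirement, the following hypothetical PySpark sketch shows a typical star-schema query (a fact table joined to its dimension tables on surrogate keys) and two simple data-validation checks. The table and column names (fact_sales, dim_date, dim_product, and their keys) are made up for illustration and are not taken from this posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

fact_sales = spark.table("fact_sales")      # fact table: one row per sale
dim_date = spark.table("dim_date")          # dimension: calendar attributes
dim_product = spark.table("dim_product")    # dimension: product attributes

# Star-schema query: join the fact table to its dimensions on surrogate
# keys, then aggregate for analysis.
revenue_by_month = (fact_sales
    .join(dim_date, "date_key")
    .join(dim_product, "product_key")
    .groupBy("year", "month", "category")
    .agg(F.sum("sales_amount").alias("revenue")))
revenue_by_month.show()

# Simple data-validation checks of the kind the requirement alludes to:
# no orphaned foreign keys in the fact table and no negative amounts.
orphans = fact_sales.join(dim_product, "product_key", "left_anti").count()
negatives = fact_sales.filter(F.col("sales_amount") < 0).count()
assert orphans == 0 and negatives == 0, "data validation failed"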

