One North Consulting is currently hiring Data Engineers with 7+ years of experience, as per the details given below.
Interested Singapore Citizens / Singapore Permanent Residents can reach out via email: [email protected]
Skills & Responsibilities:
As a Data Engineer, you will focus on managing the Hadoop cluster, implementing the data ingestion framework, and designing data models.
Your primary role will be to implement the data lake and transform the data for business use.
- Experience with at least one cloud infrastructure provider (Azure / AWS)
- Experience in building data pipelines using batch processing with Apache Spark (Spark SQL, Dataset / DataFrame API) or Hive Query Language (HQL)
- Knowledge of Big data ETL processing tools
- Write SQL scripts to test ETL results
- Write scripts to regression test the ETL iterations already completed
- Experience with Hive and Hadoop file formats (Avro / Parquet / ORC)
- Basic knowledge of scripting (shell / bash)
- Experience with CI/CD tools such as Jenkins, JIRA, Bitbucket, Artifactory, Bamboo and Azure DevOps
- Understanding of DevOps practices using Git version control
- Debug, fine-tune and optimize large-scale data processing jobs
- Deliver big data solutions based on on-premise Hadoop or cloud-based systems such as AWS.
- Manage the Hadoop cluster, and participate in scale-out planning and implementation.
- Design the ingestion layer for structured and unstructured data (text, voice, XML, etc.)
- Implement specific data model for business & analytics use.
- Deliver ELT solutions including data extraction, transformation, cleansing, integration and management.
- Augment existing data with new sources, including untapped internal and external data.
- Contribute to the establishment and maintenance of cloud computing platform and big data services.
- Provide support for analytics tools and environments such as R Server, and debug performance issues.
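As a rough illustration of the batch ETL work described above, the sketch below walks through a minimal extract-transform-load step. It deliberately uses Python's built-in sqlite3 as a stand-in for the actual source and target systems (which in this role would be Spark/Hive over Hadoop); every table and column name here is invented for the example.

```python
import sqlite3

# Illustrative only: a minimal batch ETL step (extract -> transform -> load)
# against an in-memory SQLite database. The real pipeline would target
# Spark SQL / Hive tables; names here are hypothetical.
def run_etl(conn):
    cur = conn.cursor()
    # "Extract": a raw landing table, as it might arrive from the ingestion layer.
    cur.execute("CREATE TABLE raw_sales (id INTEGER, amount TEXT, region TEXT)")
    cur.executemany(
        "INSERT INTO raw_sales VALUES (?, ?, ?)",
        [(1, "100.50", "SG"), (2, "bad", "SG"), (3, "75.00", "MY")],
    )
    # "Transform + load": cast amounts, drop rows failing a crude quality
    # filter, and aggregate per region into a curated table.
    cur.execute(
        """
        CREATE TABLE sales_by_region AS
        SELECT region, SUM(CAST(amount AS REAL)) AS total_amount
        FROM raw_sales
        WHERE amount GLOB '[0-9]*'   -- crude data-quality filter
        GROUP BY region
        """
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
run_etl(conn)
print(conn.execute(
    "SELECT region, total_amount FROM sales_by_region ORDER BY region"
).fetchall())
```

The same cast-filter-aggregate shape carries over almost verbatim to Spark SQL or HQL; only the engine and table storage change.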
Competencies:
- Databases: RDBMS, SQL programming
- ETL and Data Integration Tools: Azure Data Factory (ADF), Microsoft SQL Server Integration Services (SSIS), SAS Data Integration
- Big Data: Hadoop (Hortonworks), Hive, Spark, Sqoop, etc
- Programming and Scripting: Linux/Unix Shell Scripting, Java, Scala, Hive QL
- BI/Dashboarding: SAS Visual Analytics, Qlik Sense
Working Experience:
- 7+ years (no upper limit) in data engineering and modelling.
- Hands-on expertise in MS SQL queries and scripting.
- Hands-on experience in testing ETL systems (mandatory)
- Hands-on experience managing data mapping, data quality and integrity, and performance in data processing.
- Develop and implement techniques and analytics applications to transform raw data into meaningful information
- Experience in Agile software development.
- Ability to create executable scripts for regression testing
- Experience in testing SSIS ETL mappings is a plus
- Visualise, interpret, and report data findings, and create dynamic data reports where needed
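The testing duties above (SQL scripts to verify ETL results, executable regression scripts) might look like the following minimal sketch. It again uses Python's built-in sqlite3 purely as a stand-in; the tables, the simulated load, and the specific checks are all hypothetical examples of common source-vs-target assertions.

```python
import sqlite3

# Illustrative only: a scripted regression check run after each ETL
# iteration, comparing a target table against its source.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE src_customers (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE tgt_customers (id INTEGER PRIMARY KEY, email TEXT);
    INSERT INTO src_customers VALUES (1, '[email protected]'), (2, '[email protected]');
    INSERT INTO tgt_customers SELECT * FROM src_customers;  -- simulated ETL load
""")

checks = {
    # Row counts must match between source and target.
    "row_count": "SELECT (SELECT COUNT(*) FROM src_customers)"
                 " = (SELECT COUNT(*) FROM tgt_customers)",
    # No mandatory column may be NULL after the load.
    "no_null_email": "SELECT COUNT(*) = 0 FROM tgt_customers WHERE email IS NULL",
    # No source rows missing from the target.
    "no_diff": """
        SELECT COUNT(*) = 0 FROM (
            SELECT id, email FROM src_customers
            EXCEPT
            SELECT id, email FROM tgt_customers
        )
    """,
}
results = {name: bool(cur.execute(sql).fetchone()[0]) for name, sql in checks.items()}
print(results)
```

Collecting each check as a named boolean makes the script easy to rerun unchanged against every ETL iteration, which is the essence of the regression-testing requirement.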
If interested, please email your CV to [email protected]