Job description (In Detail):
- At least 5 years of experience as a data engineer or backend developer in the big data field.
- Solid working knowledge of selecting and implementing optimal data structures and algorithms to build efficient, scalable applications in Java or Python.
- Experience in application integration with Snowflake, Oracle and MS-SQL.
- Experience with the Systems Development Life Cycle (SDLC) methodology and/or agile methodologies such as Scrum and Kanban.
- Hands-on experience using Linux (or a Unix-like OS) as a development environment, and familiarity with shell scripting and command-line tools.
- Understand and apply industry best practices for code versioning, testing, CI/CD workflows and code documentation.
- Familiarity with AWS services will be an added advantage.
- A good team player with strong analytical skills who enjoys solving complex problems with innovative ideas.
- Good communication and interpersonal skills, needed to interact with data analysts, business end-users and vendors when designing and developing solutions.
- Detail-oriented and meticulous in day-to-day operations.
- Work closely with the data product team and business end-users to implement and support data platforms using best-of-breed technologies and methodologies.
- Design and build robust, scalable data ingestion and data management solutions for both batch and streaming data sources.
- Enable ingestion checks and data quality checks for all data sets in the data platform, and ensure that data issues are actively detected, tracked and fixed without breaching SLAs.
- Work with team members to establish best practices and internal processes that enhance data pipeline operations.
SKILLS: Snowflake, CI/CD, Oracle, Unix, Linux, AWS
Please refer to U3’s Privacy Notice for Job Applicants/Seekers at https://u3infotech.com/privacy-notice-job-applicants/. When you apply, you voluntarily consent to the collection, use and disclosure of your personal data for recruitment/employment and related purposes.