The candidate will be responsible for supporting business owners, data scientists and technical managers with data engineering related services, enabling them to develop quick prototypes with specific data sets, test hypotheses and build data models across various internal and external data sources. He/she will be required to collaborate with the various divisions of the company to provide data engineering and embedded analytics solutions and Extract, Transform, Load (ETL) workflows; to identify, analyse and interpret trends or patterns in complex data sets; and to guide business owners in achieving their business objectives through data insights.
The candidate must have broad knowledge of, and working experience in, data engineering, data management, data analysis and data protection. He/she should have relevant expertise in data profiling and cleaning, data management and access control, data architecture design, and data scanning.
He/she should also be proficient in performing data analytics, writing reports and presenting findings. He/she should possess strong analytical skills to collect, organise, analyse and disseminate significant amounts of information with attention to detail and accuracy.
Key Responsibilities:
1. Support business owners, data scientists and technical managers in data-driven experiments, hypothesis testing and use cases using the varied data sources of the company.
2. Design and own dashboards in Tableau, MicroStrategy, Grafana or other data analytics tools, and develop and deploy embedded analytics solutions by integrating real-time BI dashboards into business applications.
3. Work with the various divisions to understand key business problems and prioritise business and information needs, recommending improvement actions based on data mining, statistical analysis and data visualisation techniques.
4. Work on large-scale structured and unstructured data sets to solve a wide array of challenging problems, using analytical, statistical and machine learning approaches to build ML models.
5. Design and develop robust, scalable, and efficient data pipelines for collecting, processing, and storing large volumes of structured and unstructured data.
6. Implement Extract, Transform, and Load (ETL) solutions for new data sources on the Data Platform (a minimal, illustrative pipeline sketch follows this list).
7. Optimise data pipelines for performance, scalability, and cost-effectiveness, considering factors such as data volume, processing speed, and resource utilisation.
8. Stay up to date with the latest developments in data engineering and AI/ML technologies, and contribute to the adoption of best practices within the team.
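For context on responsibility 6, the sketch below shows a minimal batch ETL flow in Python with pandas. It is illustrative only and not part of the role description: the CSV source, column names, SQLite target and table name are all hypothetical, and a real pipeline for this role would load into the company's data platform (for example Redshift or Oracle) with orchestration, monitoring and access control around it.

    # Illustrative ETL sketch only. The source file, columns and target table
    # are hypothetical; a production pipeline would write to the organisation's
    # data platform (e.g. Redshift or Oracle) rather than a local SQLite file.
    import sqlite3
    import pandas as pd

    def extract(csv_path: str) -> pd.DataFrame:
        # Extract: pull raw records from a new data source (here, a CSV export).
        return pd.read_csv(csv_path)

    def transform(raw: pd.DataFrame) -> pd.DataFrame:
        # Transform: basic profiling and cleaning before the data is shared.
        cleaned = raw.drop_duplicates().dropna(subset=["customer_id"])
        cleaned["order_date"] = pd.to_datetime(cleaned["order_date"])
        cleaned["revenue"] = cleaned["quantity"] * cleaned["unit_price"]
        return cleaned

    def load(df: pd.DataFrame, db_path: str, table: str) -> None:
        # Load: write the curated table where analysts and dashboards can read it.
        with sqlite3.connect(db_path) as conn:
            df.to_sql(table, conn, if_exists="replace", index=False)

    if __name__ == "__main__":
        load(transform(extract("orders_export.csv")), "analytics.db", "orders_curated")

In practice each stage would be parameterised per data source and scheduled by a workflow tool, but the extract-transform-load structure stays the same.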
Qualifications:
1. Master’s degree in computer science, engineering, or a related field, with a minimum of 2 years of working experience in the IT industry.
2. Proven experience as a Data Engineer, BI Developer, or related role, with a focus on building data pipelines for data analytics and machine learning projects.
3. Strong proficiency in programming languages such as Python and Java, data formats such as XML, and SQL for data analysis.
4. Familiarity with business intelligence and visualisation tools (MicroStrategy, Tableau, Grafana, etc.) and databases (Redshift, Oracle, etc.).
5. Understanding of cloud platforms such as AWS, GCP or Azure, and experience with cloud-based data services.
6. Understanding of database systems, data modelling and ETL principles.
7. Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
8. Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
9. Excellent presentation skills.