Job Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and collaborating with the infrastructure team to re-design infrastructure for greater scalability and stability
- Collaborate with the infrastructure team to provision the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and ‘big data’ technologies
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data needs
- Keep our data separated and secure across national boundaries through multiple data centres and AWS regions
- Create data tools for analytics and data science team members that assist them in building and optimizing models, positioning us as an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
- Build tools from the ground up that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
- Prepare daily, weekly, monthly, yearly and ad hoc reports and analyses for the ecommerce team
- Utilize data and share insights with the ecommerce team
- Stay abreast of, and share, best practices for ecommerce data analytics
The Ideal Candidate
- Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of widely used RDBMS
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Strong project management and organizational skills
- Experience supporting and working with cross-functional teams in a dynamic environment
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Experience with big data tools: Hadoop, Spark, Kafka, NiFi, Sqoop, etc.
- Experience with relational SQL and NoSQL databases
- Experience with data pipeline and workflow management tools: Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, Kinesis, Firehose
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with QlikView and Qlik Sense is highly preferred
- Experience with cloud environments (Alibaba Cloud, AWS)
- Experience with DevOps environment: Docker, Kubernetes, Git, CI/CD