A 12-month contract assigned to our client
Work Location: To be confirmed (during interview)
Salary Range: $6,000 - $8,000
Skills:
• Python
• PySpark
• SQL
• Unix
• Minimum 2 to 3 years of experience
As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your role will involve creating data pipelines, ensuring data quality, and implementing ETL (extract, transform, and load) processes to migrate and deploy data across systems. You are expected to act as a subject matter expert (SME), collaborating with and managing the team, taking ownership of team decisions, engaging with multiple teams, and contributing to key decisions. You will also provide solutions to problems for your immediate team and across multiple teams. Expert proficiency in Python is required; expert proficiency in Cloudera Hadoop and PySpark, and advanced proficiency in data engineering and data pipelines, are recommended.
• Hands-on experience in big data engineering using Python, PySpark, and Linux.
• Develop innovative data solutions to optimize data generation, collection, and processing.
• A track record of implementing systems using Hive, Impala, and the Cloudera Data Platform is preferred.
• Implement advanced ETL processes to ensure efficient data migration and deployment.
• Collaborate with cross-functional teams to identify and address data quality issues.
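For candidates unfamiliar with the role's core workflow, the extract-transform-load cycle above can be sketched in a few lines. This is a minimal, standard-library illustration only (the sample data, table name, and data-quality rule are invented for the example; in production this work would be done with PySpark against Hive/Impala on the Cloudera Data Platform):

```python
import csv
import io
import sqlite3

# Extract: read raw records (an in-memory CSV stands in for a source system).
raw = "id,name,amount\n1,alice,10\n2,bob,\n3,carol,7\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: enforce a data-quality rule (amount must be present) and cast types.
clean = [
    {"id": int(r["id"]), "name": r["name"], "amount": float(r["amount"])}
    for r in rows
    if r["amount"]
]

# Load: write the cleaned records into a target store (SQLite stands in
# for the warehouse) and verify the result.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, name TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (:id, :name, :amount)", clean)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The same three stages map directly onto a PySpark job: read from a source, apply DataFrame transformations and quality filters, and write to the target table.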