Big Data Engineer – 1-Year Contract
We are seeking a seasoned Big Data Engineer with 10+ years of experience in Spark (Scala or PySpark) and Hive. The ideal candidate will have strong programming skills and a deep understanding of big data processing within the Hadoop ecosystem.
Key Responsibilities
- Design & Development: Lead the design, development, and performance tuning of Spark applications.
- Programming: Write efficient, maintainable code in Java, Scala, or Python.
- Big Data Tools: Work with big data processing and workflow orchestration tools, including Apache Airflow and Control-M.
- Distributed Systems: Apply distributed-systems knowledge to improve data processing efficiency and reliability.
- Database Knowledge: Draw on experience with RDBMS, data warehouses, and Unix shell scripting.
Qualifications
- Experience: 10+ years in big data technologies, with extensive knowledge of Spark and Hive.
- Skills: Strong analytical and problem-solving capabilities.
- Ecosystem Familiarity: Proficient in the Hadoop ecosystem and big data processing methodologies.