Responsibilities:
• Develop ETL processes to extract, transform, and load data from various sources into our data warehouse.
• Develop and maintain scalable data pipelines for batch and real-time processing.
• Collaborate with cross-functional teams to understand and meet data requirements.
• Monitor data pipelines and troubleshoot issues as they arise.
• Ensure data quality and integrity throughout the data pipeline.
• Optimize and tune existing data pipelines for performance and efficiency.
• Conduct performance tuning and optimization of database systems for improved efficiency.
• Design and implement data models and schemas for efficient data storage and retrieval.
• Provide post-deployment support and troubleshooting to resolve issues raised by users or stakeholders.
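To illustrate the kind of ETL work described above, here is a minimal sketch of an extract-transform-load flow in Python. The source records, table name, and cleaning rules are hypothetical examples, not part of this role's actual stack; a real pipeline would read from production sources and load into the team's warehouse.

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    yield from rows

def transform(records):
    """Transform: normalize names and drop records missing an amount."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # data-quality rule: incomplete records are rejected
        yield {"name": rec["name"].strip().title(), "amount": float(rec["amount"])}

def load(records, conn):
    """Load: write transformed records into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:name, :amount)", list(records))
    conn.commit()

if __name__ == "__main__":
    source = [
        {"name": "  alice ", "amount": "10.5"},
        {"name": "bob", "amount": None},  # dropped by the transform step
    ]
    conn = sqlite3.connect(":memory:")
    load(transform(extract(source)), conn)
    print(conn.execute("SELECT name, amount FROM sales").fetchall())
```

Keeping extract, transform, and load as separate composable steps, as here, is what makes pipelines like this easy to monitor, test, and tune in isolation.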
Requirements:
• Proven experience as a Data Engineer or in a similar role.
• Ability to develop ETL processes for data extraction, transformation, and loading.
• Proficiency in programming languages such as Python, Java, or Scala.
• Experience with distributed computing frameworks (e.g., Apache Spark, Hadoop).
• Strong understanding of SQL and database technologies.
• Excellent problem-solving and communication skills.
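As a small example of the SQL and data-integrity skills listed above, the sketch below runs a duplicate-detection check against a hypothetical `orders` table (the table, columns, and data are illustrative only, using SQLite for self-containment).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "alice"), (2, "bob"), (2, "bob"), (3, "carol")],
)

# Data-quality check: order_ids that appear more than once violate
# the expected uniqueness constraint and should be investigated.
dupes = conn.execute("""
    SELECT order_id, COUNT(*) AS n
    FROM orders
    GROUP BY order_id
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # → [(2, 2)]
```

Checks like this are typically scheduled alongside the pipeline so integrity problems surface before downstream consumers see the data.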