Data Engineer (PySpark, Dagster)
Key Responsibilities:
Design, develop, and maintain robust data pipelines using PySpark and Dagster, along with Sqoop, Flume, and Informatica, to extract, transform, and load data from a variety of source systems into target systems.
Work with Teradata utilities and DataStage for ETL processing, and optimize data workflows for scalability and efficiency.
Develop and main...