Key Responsibilities:
- Design, develop, and maintain robust data pipelines using PySpark, Dagster, Sqoop, Flume, and Informatica to extract, transform, and load data from various sources to target systems.
- Work with Teradata Utilities and DataStage for ETL processes and optimize data workflows for scalability and efficiency.
- Develop and maintain applications using C#, ASP.NET, and MVC frameworks for data-driven applications and web solutions.
- Work collaboratively with other software developers to integrate backend data solutions with web and application services.
- Use Python libraries (Pandas, NumPy) to clean, manipulate, and analyze large datasets, providing valuable insights to business units.
- Develop, optimize, and troubleshoot complex data transformations and aggregations to support analysis and reporting.
- Build, manage, and optimize data visualizations using Tableau, Power BI, Qlik Sense, and SSRS to empower stakeholders with insightful, actionable data.
- Create customized dashboards, reports, and interactive visualizations to communicate key performance metrics and business insights.
- Leverage knowledge of Teradata and other databases for query optimization, data extraction, and integration.
- Work with distributed computing environments (e.g., PySpark) to process large datasets and support advanced analytics.
- Ensure data quality and consistency across all data sources and pipelines, applying best practices for data governance.
- Perform data audits and implement monitoring solutions to detect and resolve data discrepancies and issues.
- Partner with data scientists, analysts, and business stakeholders to understand data needs and provide high-quality solutions.
- Communicate technical solutions and data insights effectively to non-technical stakeholders.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Data Science, or a related field.
- 3+ years of experience in data engineering, software development, or data analysis roles with expertise in the specified technologies.
- Proficiency in Python (Pandas, NumPy, PySpark) for data manipulation and analytics.
- Strong skills in C#, ASP.NET, and MVC for application development.
- Experience with ETL tools like Dagster, Informatica, Sqoop, Flume, and DataStage.
- Familiarity with big data technologies, including Spark and Teradata Utilities.
- Strong experience with Tableau, Power BI, Qlik Sense, and SSRS for reporting and dashboard development.
- Experience with Teradata, SQL Server, or similar relational databases for efficient querying and data extraction.
- Knowledge of data quality, governance, and data lifecycle management best practices.
Savita Rai
EA Reg No: R1873418
EA License No: 23C2060
Disclaimer: The company is committed to ensuring the privacy and security of your information. By submitting this form, you consent to the collection, processing, and retention of the information you provide. The data collected (which may include your contact details, educational background, work experience, and skills) will be used solely for the purpose of evaluating your qualifications for the position you are applying for. Your data will be stored securely and retained for the duration necessary to fulfill our hiring process. If you are not selected for the position, your data will be kept on file for a limited period in case future opportunities arise. You have the right to access, correct, or delete your data at any time by contacting us at Quess Singapore (quesscorp.sg).