Find Your Dream Job With Us
WE ARE HIRING! Apply now and make a difference.
Job Title: Data Engineer (Spark Scala Elastic)
Role: Developer
Job Requisition Number: JR28213, JR28214
Job Level: 1 - 9 years of relevant experience (L1 - L3)
Location: Singapore
Job Objectives
We are seeking a talented and experienced Spark Scala Developer with expertise in Elasticsearch to join our team. As a Spark Scala Developer, you will play a crucial role in designing, developing, and optimizing big data solutions using Apache Spark, Scala, and Elasticsearch. You will collaborate with cross-functional teams to build scalable and efficient data processing pipelines and search applications. Knowledge and experience in the Compliance / AML domain will be a plus.
Key Responsibilities
- Design, develop, and implement Spark Scala applications and data processing pipelines to process large volumes of structured and unstructured data
- Integrate Elasticsearch with Spark to enable efficient indexing, querying, and retrieval of data
- Optimize and tune Spark jobs for performance and scalability, ensuring efficient data processing and indexing in Elasticsearch
- Collaborate with data engineers, data scientists, and other stakeholders to understand requirements and translate them into technical specifications and solutions
- Implement data transformations, aggregations, and computations using Spark RDDs, DataFrames, and Datasets, and integrate them with Elasticsearch (see the indexing sketch after this list)
- Develop and maintain scalable and fault-tolerant Spark applications, adhering to industry best practices and coding standards
- Troubleshoot and resolve issues related to data processing, performance, and data quality in the Spark-Elasticsearch integration
- Monitor and analyze job performance metrics, identify bottlenecks, and propose optimizations in both Spark and Elasticsearch components
- Stay updated with emerging trends and advancements in the big data space to ensure continuous improvement and innovation
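To give candidates a concrete feel for the work described above, here is a minimal Scala sketch of a typical Spark-to-Elasticsearch indexing step. It assumes the elasticsearch-hadoop (elasticsearch-spark) connector is on the classpath and an Elasticsearch node at localhost:9200; the input path, field names, and index name are purely illustrative and not part of any actual project codebase.

    // Illustrative only: aggregate raw JSON events with DataFrames and index the
    // result into Elasticsearch via the elasticsearch-spark connector (assumed
    // dependency, e.g. elasticsearch-spark-30; ES assumed at localhost:9200).
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.elasticsearch.spark.sql._

    object TransactionIndexer {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("spark-es-indexing-sketch")
          .config("es.nodes", "localhost")   // assumed Elasticsearch host
          .config("es.port", "9200")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical input: one JSON transaction record per line.
        val events = spark.read.json("hdfs:///data/transactions/*.json")

        // DataFrame aggregation: transaction count and total amount per account per day.
        val daily = events
          .withColumn("day", to_date($"timestamp"))
          .groupBy($"accountId", $"day")
          .agg(count("*").as("txnCount"), sum($"amount").as("totalAmount"))

        // Index the aggregated rows into Elasticsearch.
        daily.saveToEs("daily-account-activity")

        spark.stop()
      }
    }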
Key Requirements
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field
- Strong experience in developing Spark applications; experience with Spark Streaming is a plus
- Proficiency in the Scala programming language and familiarity with functional programming concepts
- In-depth understanding of Apache Spark architecture, RDDs, DataFrames, and Spark SQL
- Experience integrating and working with Elasticsearch for data indexing and search applications
- Solid understanding of Elasticsearch data modeling, indexing strategies, and query optimization
- Experience with distributed computing, parallel processing, and working with large datasets
- Familiarity with big data technologies such as Hadoop, Hive, and HDFS
- Proficient in performance tuning and optimization techniques for Spark applications and Elasticsearch queries (see the read-side sketch after this list)
- Strong problem-solving and analytical skills with the ability to debug and resolve complex issues
- Familiarity with version control systems (e.g., Git) and collaborative development workflows
- Excellent communication and teamwork skills with the ability to work effectively in cross-functional teams
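On the read side, the following is a similarly minimal sketch of querying indexed data back into Spark, relying on the connector's filter pushdown so that simple predicates are evaluated by Elasticsearch rather than in Spark. Again, the index and field names are hypothetical and the elasticsearch-spark connector is assumed to be available.

    // Illustrative only: load an Elasticsearch index as a DataFrame and filter it.
    // The connector registers the short format name "es" and, in recent versions,
    // pushes simple filters down to Elasticsearch as queries by default.
    import org.apache.spark.sql.SparkSession

    object AlertReader {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("spark-es-read-sketch")
          .config("es.nodes", "localhost")   // assumed Elasticsearch host
          .config("es.port", "9200")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical index written by the indexing sketch above.
        val alerts = spark.read
          .format("es")
          .load("daily-account-activity")
          .where($"totalAmount" > 10000)     // intended to run as an ES range query

        alerts.show(20)
        spark.stop()
      }
    }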
Please forward your resume in MS Word format to [email protected] and [email protected]
Please also refer your friends, as we have various other IT openings.
Please do not send your resume in PDF format. Include the following details:
1. Notice period
2. Current Salary
3. Expected Salary
4. Visa Status in Singapore
5. Current Location