DESCRIPTION/RESPONSIBILITIES:
We know that people want great value combined with an excellent experience from a bank they can trust, so we launched our digital bank, Chase UK, to revolutionize mobile banking with seamless journeys that our customers love. We're already trusted by millions in the US and we're quickly catching up in the UK – but how we do things here is a little different. We're building the bank of the future from scratch, channeling our start-up mentality every step of the way – meaning you'll have the opportunity to make a real impact.
As a Lead Data Engineer at JPMorgan Chase within International Consumer Banking, you will be part of a flat-structured organization. You will deliver end-to-end, cutting-edge solutions in the form of cloud-native microservices applications, leveraging the latest technologies and industry best practices. You are expected to be involved in the design and architecture of solutions while remaining engaged across all stages of the software development lifecycle.
Our Engineering team is at the heart of this venture, focused on getting smart ideas into the hands of our customers. We're looking for people who have a curious mindset, thrive in collaborative squads, and are passionate about new technology. By their nature, our people are also solution-oriented, commercially savvy and have a head for fintech. We work in tribes and squads that focus on specific products and projects – and depending on your strengths and interests, you'll have the opportunity to move between them.
Job responsibilities
• Generates data models for their team using firmwide tooling, linear algebra, statistics, and geometrical algorithms
• Delivers data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
• Implements database back-up, recovery, and archiving strategy
• Evaluates and reports on access control processes to determine effectiveness of data asset security with minimal supervision
• Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
• Bachelor’s degree or equivalent in computer science or a related field
• At least 5 years of experience in data engineering
• Working experience with both relational and NoSQL databases
• Experience with database back-up, recovery, and archiving strategy
• Proficient knowledge of linear algebra, statistics, and geometrical algorithms
• Deep understanding of distributed systems and cloud technologies (AWS, GCP, Azure, etc.)
• Experience with SQL (any dialect) and data tools (e.g., dbt)
• Experience in all stages of the software development lifecycle (requirements, design, architecture, development, testing, deployment, release, and support)
• Experience with large-scale datasets and with data lake and data warehouse technologies at TB scale (ideally PB scale), using at least one of BigQuery, Redshift, or Snowflake
• Ability to work in a dynamic, agile environment within a geographically distributed team
Preferred qualifications, capabilities, and skills
• Experience with a scheduling system (Airflow, Azkaban, etc.)
• Understanding of distributed and non-distributed data structures, caching concepts, and the CAP theorem
• Understanding of security frameworks / standards and privacy
• Experience automating deployment, releases, and testing in continuous integration and continuous delivery (CI/CD) pipelines
• Experience with containers and container-based deployment environment (Docker, Kubernetes, etc.)
• A solid approach to writing unit level tests using mocking frameworks, as well as automating component, integration and end-to-end tests
To apply for this position, please use the following URL:
https://ars2.equest.com/?response_id=fd67088ca74c88c330df83f502968df0