Job Description
As a Senior Lead Data Engineer at JPMorgan Chase within the Commercial and Investment Bank's Markets Tech Team, you are an integral part of an agile team that works to enhance, build, and deliver data collection, storage, access, and analytics in a secure, stable, and scalable way. Leverage your deep technical expertise and problem-solving capabilities to drive significant business impact and tackle a diverse array of challenges that span multiple data pipelines, data architectures, and other data consumers.
Job responsibilities
Design and build hybrid on-prem and public cloud data platform solutions
Design and build end-to-end data pipelines for ingestion, transformation, and distribution, supporting both batch and streaming workloads
Develop and own data products that are reusable, well-documented, and optimized for analytics, BI, and AI/ML consumers
Implement and manage modern data lake and Lakehouse architectures, including Apache Iceberg table formats
Implement interoperability across data platforms and tools, including Databricks, Snowflake, Amazon Redshift, AWS Glue, and Lake Formation
Establish and maintain end-to-end data lineage to support observability, impact analysis, and regulatory requirements
Implement data quality validation and monitoring using frameworks such as Great Expectations
Provide recommendations and insight on data management and governance procedures applicable to the acquisition, maintenance, validation, and utilization of data, and advise junior engineers and technologists
Design and deliver trusted data collection, storage, access, and analytics data platform solutions in a secure, stable, and scalable way
Define database backup, recovery, and archiving strategies, and approve data analysis tools and processes
Create functional and technical documentation supporting best practices, and evaluate and report on access control processes to determine the effectiveness of data asset security
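The data quality validation work described above typically means codifying expectations about a dataset and checking each batch against them before it is published. As a minimal, hand-rolled sketch of the pattern that frameworks like Great Expectations formalize (the `trade_id`/`notional` fields and expectation names here are hypothetical illustrations, not from this posting):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    """A named validation rule applied to a batch of rows."""
    name: str
    check: Callable[[list[dict]], bool]

def validate(rows: list[dict], expectations: list[Expectation]) -> dict[str, bool]:
    """Run every expectation against the batch and report pass/fail per rule."""
    return {e.name: e.check(rows) for e in expectations}

# Hypothetical batch of trade records
rows = [
    {"trade_id": 1, "notional": 1_000_000.0},
    {"trade_id": 2, "notional": 250_000.0},
]

expectations = [
    Expectation("trade_id_not_null",
                lambda rs: all(r["trade_id"] is not None for r in rs)),
    Expectation("notional_positive",
                lambda rs: all(r["notional"] > 0 for r in rs)),
    Expectation("trade_id_unique",
                lambda rs: len({r["trade_id"] for r in rs}) == len(rs)),
]

report = validate(rows, expectations)
# A failing rule in the report would gate the batch from promotion downstream.
```

In production, a framework adds what this sketch omits: declarative expectation suites, per-column profiling, and result stores that feed the observability and lineage tooling the role calls for.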
Required qualifications, capabilities, and skills
Formal training or certification in computer science concepts, or equivalent, and 5+ years of applied experience
Hands-on experience building and operating batch and streaming data pipelines at scale
Experience with Apache Iceberg and modern table formats in Lakehouse environments
Strong proficiency with Databricks, Snowflake, Amazon Redshift, and AWS data services such as Glue and Lake Formation
Experience implementing data lineage, data quality, and data observability frameworks
Working experience with both relational and NoSQL databases
Advanced understanding of database backup, recovery, and archiving strategies
Experience presenting and delivering visual data
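The batch-pipeline experience listed above boils down to a repeatable ingest, transform, and load cycle. As a self-contained sketch of that shape using only the Python standard library's in-memory SQLite store (the `positions` table, symbols, and cleaning rules are hypothetical stand-ins for a real source and warehouse):

```python
import sqlite3

def run_batch_pipeline(raw_records: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """Ingest raw records, transform them, and load into a relational store."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE positions (symbol TEXT PRIMARY KEY, notional REAL)")

    # Transform step: normalize symbols and drop non-positive notionals
    cleaned = [(symbol.strip().upper(), notional)
               for symbol, notional in raw_records
               if notional > 0]

    # Load step: idempotent upsert keyed on symbol
    conn.executemany("INSERT OR REPLACE INTO positions VALUES (?, ?)", cleaned)
    conn.commit()

    result = conn.execute(
        "SELECT symbol, notional FROM positions ORDER BY symbol"
    ).fetchall()
    conn.close()
    return result

out = run_batch_pipeline([(" aapl ", 100.0), ("msft", 250.0), ("bad", -5.0)])
# out == [("AAPL", 100.0), ("MSFT", 250.0)]
```

At scale the same stages map onto the stack named in this posting: ingestion via Glue or Spark jobs, transforms over Iceberg tables, and loads into Snowflake, Redshift, or Databricks; a streaming variant runs the transform continuously over a message feed instead of a fixed batch.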