
Data Engineer II

Job in Bengaluru (Bangalore), Karnataka 560001, India
Listing for: JP Morgan Chase & Co.
Full Time position
Listed on 2026-02-08
Job specializations:
  • IT/Tech
    Data Engineer, Cloud Computing
Job Description & How to Apply Below
Location: Bengaluru

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.

As a Data Engineer II at JPMorgan Chase within the Employee Platforms team, you are part of an agile team that designs, enhances, and delivers data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you are responsible for overseeing Databricks adoption and supporting internal teams as they integrate with Databricks.

In this role, you will collaborate with colleagues across the organization to deliver secure, scalable, and reliable technology solutions that drive business success.

Job responsibilities

Design, develop, and troubleshoot software solutions with a focus on Databricks integration and adoption
Build and maintain data pipelines using Databricks to ensure efficient and reliable data processing for internal teams
Write secure, high-quality production code in Python and participate in code reviews and debugging
Implement and support CI/CD pipelines to automate software delivery and improve operational stability
Collaborate with internal teams to provide guidance and best practices for Databricks usage
Participate in team discussions and activities to share knowledge and drive continuous improvement
Contribute to a positive team culture that values diversity, inclusion, and respect
Support the adoption of best practices in software engineering and data management
Troubleshoot and resolve issues related to data pipelines and software integration
Document technical processes, workflows, and solutions for team reference
Engage in ongoing learning to stay updated with advancements in Databricks, Python, and related technologies

Required qualifications, capabilities, and skills

Formal training or certification on software engineering concepts and 2+ years of applied experience

Show proficiency in Python programming
Build and maintain data pipelines using Databricks
Apply practical experience with CI/CD tools and automation methods
Exhibit familiarity with agile development practices, application resiliency, and security
Participate in code reviews and debugging activities
Collaborate with teams to implement best practices for Databricks and data engineering
Troubleshoot and resolve technical issues in data pipelines
Document and communicate technical solutions effectively
Engage in continuous improvement and learning within the team environment

Preferred qualifications, capabilities, and skills
Demonstrate experience with AWS services such as S3, EMR, Glue, ECS/EKS, and Athena
Obtain certifications in AWS, Databricks, or automation tools
Gain exposure to open table formats like Iceberg or Delta Lake and data catalog tools such as AWS Glue Data Catalog
Pursue interests in cloud computing, artificial intelligence, or mobile development