Software Engineer II – Big Data & Analytics
Job in Columbus, Franklin County, Ohio, 43224, USA
Listed on 2026-01-16
Listing for: JPMorgan Chase & Co.
Full-time position
Job specializations:
- IT/Tech: Data Engineer, Data Analyst
Job Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Software Engineer for Big Data and Analytics at JPMorgan Chase within Consumer and Community Banking Data Technology, you will be an integral part of an agile team that enhances, builds, and delivers trusted, market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you will be responsible for delivering critical technology solutions across multiple technical areas and business functions, supporting the firm’s objectives using Java, J2EE, Microservices, Python, Spark, Scala, and AWS for Business Banking Data Products.
Job Responsibilities:
- Oversee all aspects of data strategy, governance, data risk management, reporting, and analytics.
- Manage risks associated with data use, retention/destruction, and privacy.
- Design, develop, code, test, debug, and deploy scalable and extensible applications.
- Produce high-quality code utilizing Test Driven Development techniques.
- Participate in retrospectives to drive continuous improvement within the feature team.
- Participate in code reviews, ensuring all solutions align with pre-defined architectural specifications.
- Implement automation through Continuous Integration and Continuous Delivery.
- Manage cloud development and deployment, supporting applications in both private and public clouds.
Required Qualifications, Capabilities, and Skills:
- Formal training or certification in software engineering concepts and 1+ years of applied experience.
- Advanced knowledge of architecture, design, and business processes.
- Full Software Development Life Cycle experience within an Agile framework.
- Expert-level skills in Java, AWS, database technologies, Python, Scala, Spark/PySpark, or any ETL technology.
- Experience developing and decomposing complex SQL on RDBMS platforms.
- Experience with Data Warehousing concepts (including Star Schema).
- Practical experience delivering projects in Data and Analytics, Big Data, Data Warehousing, and Business Intelligence; familiarity with relevant technological solutions and industry best practices.
- Strong understanding of data engineering challenges and proven experience with data platform engineering (batch and streaming, ingestion, storage, processing, management, integration, consumption).
- Familiarity with multiple Data & Analytics technology stacks.
- Awareness of various Data & Analytics tools and techniques (e.g., Python, data mining, predictive analytics, machine learning, data modeling).
- Experience with one or more leading cloud providers (AWS, Azure, GCP).
- Ability to ramp up quickly on new technologies and strategies.
- Strong collaboration skills and ability to develop meaningful relationships to achieve common goals.
- Appreciation for controls and compliance processes for applications and data.
- In-depth understanding of data technologies and solutions is preferable.
- Ability to drive process improvements and implement necessary changes.
- Knowledge of industry-wide Big Data technology trends and best practices.