Python Developer
Job in Irving, Dallas County, Texas, 75084, USA
Listed on 2026-03-06
Listing for: 360 Technology
Full Time position
Job specializations:
- Software Development
- Data Engineer, Python
Job Description & How to Apply Below
We are seeking a highly experienced Senior Python Developer with deep expertise in PySpark and distributed data processing to lead and execute the migration of complex legacy data processing systems to scalable Python and PySpark-based services. The role requires strong banking domain knowledge and the ability to work onsite with business and technical stakeholders to modernize legacy data and processing platforms.
Key Responsibilities
- Analyze and understand complex legacy data processing logic used in banking systems.
- Design and implement scalable Python and PySpark-based data processing solutions using industry best practices.
- Migrate business-critical logic related to banking operations such as payments, accounts, transactions, risk, or reporting into distributed data pipelines.
- Develop batch and large-scale data processing jobs using PySpark.
- Ensure functional parity, performance optimization, and data integrity during migration.
- Optimize data transformations and refactor procedural logic into modular, scalable PySpark jobs.
- Collaborate onsite with data engineers, architects, business analysts, and QA teams.
- Perform unit testing, integration testing, and data validation post-migration.
- Document migration approaches, code logic, and technical designs.
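The responsibilities above center on refactoring procedural legacy logic into modular pipeline steps and verifying functional parity after migration. As a minimal illustration of that pattern, the sketch below uses plain Python (standard library only, no PySpark dependency); the function names, field names, and sample records are hypothetical and not part of this posting.

```python
# Hypothetical sketch: procedural legacy rules refactored into small,
# composable transformation functions, plus a simple parity check that
# compares legacy output against migrated output. All names and sample
# records here are illustrative assumptions, not from the job posting.

def normalize_amount(record: dict) -> dict:
    """Convert a legacy string amount in cents into a float in dollars."""
    out = dict(record)
    out["amount"] = int(record["amount_cents"]) / 100.0
    del out["amount_cents"]
    return out

def flag_high_value(record: dict, threshold: float = 10_000.0) -> dict:
    """Mark transactions at or above an illustrative reporting threshold."""
    out = dict(record)
    out["high_value"] = out["amount"] >= threshold
    return out

def migrate(records: list[dict]) -> list[dict]:
    """Run the modular pipeline; each step mirrors one legacy rule."""
    return [flag_high_value(normalize_amount(r)) for r in records]

def parity_check(legacy: list[dict], migrated: list[dict]) -> bool:
    """Functional parity: same row count and identical records by key."""
    key = lambda r: r["txn_id"]
    return (len(legacy) == len(migrated)
            and sorted(legacy, key=key) == sorted(migrated, key=key))

# Output captured from the (hypothetical) legacy system:
legacy_output = [
    {"txn_id": "T1", "amount": 12500.0, "high_value": True},
    {"txn_id": "T2", "amount": 42.5, "high_value": False},
]
# Raw source records fed to the migrated pipeline:
source = [
    {"txn_id": "T1", "amount_cents": "1250000"},
    {"txn_id": "T2", "amount_cents": "4250"},
]
assert parity_check(legacy_output, migrate(source))
```

In a real PySpark migration each function would become a DataFrame transformation, but the structure is the same: small, independently testable steps whose combined output can be diffed against the legacy system's output row for row.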
Skills & Qualifications
- 10+ years of overall software development experience.
- Strong hands-on experience with Python (3.x).
- Extensive experience with PySpark and distributed data processing frameworks.
- Strong understanding of Spark architecture (RDDs, DataFrames, Spark SQL).
- Experience building and optimizing large-scale ETL/data pipelines.
- Strong SQL knowledge and query optimization skills.
- Experience migrating legacy data processing systems to Python/PySpark-based pipelines.
- Experience working with:
- REST APIs
- Object-oriented and functional programming in Python