
Senior Data Engineer_AVP

Job in 110006, Delhi, Delhi, India
Listing for: Ujjivan Small Finance Bank
Full Time position
Listed on 2026-02-11
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst, Data Science Manager, Data Warehousing
Job Description & How to Apply Below
Position: Senior Data Engineer_AVP-I
JOB DESCRIPTION

The Senior Data Engineer will oversee the department's data integration work, including developing data models, maintaining the data warehouse and analytics environment, and writing scripts for data integration and analysis.
This role will work closely and collaboratively with members of the Data Science & Decision Management and IT teams to define requirements, mine and analyze data, integrate data from a variety of sources, and deploy high-quality data pipelines in support of the Bank's analytics needs.
They will also create and oversee an automated reporting system and data visualization applications, and manage other internal systems and processes.
Deliver projects in the data and MIS domain end to end. He/she will collaborate with senior business leaders to determine and prioritize immediate requirements, and to anticipate future data pipelines and architecture, MIS, and critical ad-hoc requests, ensuring high-quality delivery.
The role not only covers the complete data value chain but also provides tools, services, and data analytics to enable new use cases and business capabilities, directly creating value for the Bank.

KEY RESPONSIBILITIES OF THE ROLE

Plan and build efficient data platforms, systems and infrastructure for intelligent engines which empower business teams
Lead a team of data engineers to design, develop, and maintain data systems & pipelines independently to enable faster business analysis
Assess common patterns and problems to create standardized solutions by centrally building services and technology frameworks for MIS & analytics
Build algorithms and develop optimal queries to combine raw information from different sources
Partner with service / backend engineers to integrate data provided by legacy IT solutions into our designed databases and make it accessible to the internal verticals consuming that data
Focus on the design and setup of databases, data models, data transformations (ETL), critical online banking business processes
Contribute to data harmonization as well as data cleansing
Conduct complex data analysis to ensure data quality & reliability
Solve complex SQL and Big Data performance challenges
Utilize Python / PySpark / Scala or Java to analyze large-scale datasets in batch and real-time
Build reports and data visualizations, using data from the data warehouse and other sources
Produce scalable, replicable code and engineering solutions that help automate repetitive data management tasks
Define and create project plans for BI delivery
Evaluate and balance trade-offs between project size and complexity, cost, urgency, risk, and stakeholder value
Coordinate the project resources to ensure that projects are delivered on time and within budget
Partner with business stakeholders to elicit, analyze, and document business requirements, and translate them into BI requirements
Work with the data scientists to understand the data requirements and create re-usable data assets to enable data scientists to build and deploy machine learning models faster
Partner with business partners to conduct user acceptance testing
As engagements require, conduct meetings with vendors for technology evaluation, cost negotiations, vendor capability assessment, and project progress reviews
Manage the vendor's on-site staff

MINIMUM REQUIREMENTS OF KNOWLEDGE & SKILLS
Educational Qualifications
Degree in Computer Science, Software Engineering, Information Systems, Mathematics, or another STEM major
Strong professional experience and expert knowledge of database systems and database structures (structured and unstructured data)
Broad knowledge of mathematical models and statistics
Knowledge of the design principles of ETL processes in DWH / data lake environments and how to automate them
Professional software development experience with one or more of Scala, Spark, Hadoop, Java, Linux, SQL, or other BI-related languages such as Python
Experience in the big data ecosystem, with Hadoop (Pig, Hive, HDFS), Apache Spark, and NoSQL/SQL databases
Experience building big data ETL pipelines (Apache Airflow)

Experience (Years and Core Experience Type)
8+ years of experience in processing large volumes of data / big data (Hadoop, Hive, Sqoop &…
Position Requirements
10+ Years work experience