Location:
Mumbai
Role Overview:
As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.
Key Responsibilities:
Build scalable batch and real-time ETL pipelines using Spark and Hive
Integrate structured and unstructured data sources
Perform performance tuning and code optimization
Support orchestration and job scheduling (NiFi, Airflow)
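For context, a minimal sketch of the kind of batch pipeline described above; the source path, table names, and columns are purely illustrative assumptions, not part of the role description:

    # Illustrative PySpark batch ETL sketch: read raw banking transactions,
    # apply simple cleansing, and write the result to a Hive table.
    # All paths, table names, and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("transactions-batch-etl")
        .enableHiveSupport()   # enables writing to Hive-managed tables
        .getOrCreate()
    )

    # Ingest: read raw files landed by an upstream banking system
    raw = spark.read.parquet("/data/raw/transactions")

    # Transform: de-duplicate, derive a date column, drop invalid rows
    clean = (
        raw.dropDuplicates(["txn_id"])
           .withColumn("txn_date", F.to_date("txn_timestamp"))
           .filter(F.col("amount") > 0)
    )

    # Load: persist into a Hive table for downstream consumers
    clean.write.mode("overwrite").saveAsTable("analytics.transactions_clean")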
Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise
Experience:
3–15 years
Proficiency in PySpark/Scala with Hive/Impala
Experience with data partitioning, bucketing, and optimization
Familiarity with Kafka, Iceberg, and NiFi is a must
Knowledge of banking or financial datasets is a plus
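As a rough illustration of the partitioning and bucketing skills listed above (table and column names are assumptions for the example only), a PySpark job might rewrite a cleansed table partitioned by date and bucketed on account, so that date filters prune partitions and account-level joins can avoid full shuffles:

    # Hypothetical example: partitioned, bucketed Hive table written via Spark.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("bucketing-example")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read the cleansed table (hypothetical name)
    df = spark.table("analytics.transactions_clean")

    (
        df.write
          .mode("overwrite")
          .partitionBy("txn_date")        # one partition directory per date
          .bucketBy(64, "account_id")     # 64 buckets hashed on account_id
          .sortBy("account_id")
          .saveAsTable("analytics.transactions_bucketed")
    )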