Experience: 5–6 Years
Location: Mumbai (Onsite)
Job Description
We are hiring an ETL Specialist with strong skills in PySpark, Python, SQL, Hive, Kafka, and GCP BigQuery. The candidate will build and maintain scalable ETL/ELT pipelines, manage big data processing, and work with both streaming and batch workloads.
Key Responsibilities
Develop ETL/ELT pipelines using PySpark, Python, and SQL.
Handle data ingestion and transformation from Hive, Kafka, and GCP.
Orchestrate pipelines using Apache Airflow.
Work on large-scale data processing systems.
Implement data quality checks and monitoring.
Collaborate with data/analytics teams for business use cases.
Optimize workflows for performance and cost.
Ensure data governance and security compliance.
Required Skills
Strong experience with Python, PySpark, SQL, and Hive.
Hands-on experience with Kafka, Airflow, and GCP (BigQuery, Dataflow, Dataproc).
Knowledge of Docker, Kubernetes, Git, and CI/CD.
Good understanding of data modeling and structured/unstructured data.
Experience with data catalog/governance tools.
Soft Skills
Strong analytical/problem-solving skills.
Good communication & teamwork.
Ability to work in Agile environments.
High ownership and attention to detail.
If interested, please submit your CV to [HIDDEN TEXT] or share it via WhatsApp.
Stay updated with our latest job opportunities and company news by following us on LinkedIn.