Hadoop Developer
Experience Required: 3–5 years
Location: Mumbai
About the Role:
We are seeking a skilled Hadoop Developer with 3–5 years of experience to join our data engineering team in Thane. The ideal candidate will have hands-on expertise in building, maintaining, and optimizing large-scale data processing systems using the Hadoop ecosystem.
Key Responsibilities:
Design, develop, and maintain scalable big data applications using the Hadoop ecosystem.
Work on data ingestion, transformation, and processing using tools like Hive, Pig, Spark, Sqoop, and Flume.
Implement ETL pipelines and optimize data workflows for performance and scalability.
Collaborate with data scientists, BI teams, and analysts to support business needs.
Ensure data quality, security, and governance within the Hadoop cluster.
Monitor, troubleshoot, and improve existing Hadoop jobs and data pipelines.
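As a rough illustration of the ETL pattern the responsibilities above describe (ingest raw records, clean and transform them, then aggregate for downstream consumers), here is a framework-free sketch in plain Python. A production pipeline would use Spark or Hive for this; all field names below are hypothetical:

```python
from collections import defaultdict

def transform(raw_rows):
    """Clean raw records: drop malformed rows, normalize types and casing."""
    cleaned = []
    for row in raw_rows:
        try:
            cleaned.append({
                "region": row["region"].strip().lower(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError, AttributeError):
            # Skip malformed records, as a real ETL job would quarantine them.
            continue
    return cleaned

def aggregate(rows):
    """Aggregate cleaned records: total amount per region."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)

raw = [
    {"region": " West ", "amount": "10.5"},
    {"region": "west", "amount": "4.5"},
    {"region": "east", "amount": "not-a-number"},  # malformed: dropped
]
print(aggregate(transform(raw)))  # {'west': 15.0}
```

In Spark the same shape appears as a filter/map stage followed by a `groupBy` aggregation; the point of the sketch is only the clean-then-aggregate structure.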
Required Skills & Qualifications:
3–5 years of hands-on experience in Hadoop ecosystem tools (HDFS, Hive, Pig, Sqoop, Flume, Spark, Oozie).
Strong programming skills in Java, Python, or Scala.
Experience with SQL and working with large datasets.
Knowledge of ETL concepts and data integration techniques.
Experience with real-time data streaming frameworks (Kafka, Spark Streaming) is a plus.
Familiarity with cloud platforms (AWS, Azure, GCP) for big data solutions is desirable.
Good problem-solving and communication skills.