Build and performance-tune distributed data processing applications, consolidating multi-source data into unified Hadoop data lakes while ensuring production-grade reliability.
Data Processing Development
Build/deploy Spark/Scala applications for distributed data processing.
Consolidate source systems into Hadoop-based data lakes.
Performance & Reliability
Optimize Spark jobs and troubleshoot Hadoop cluster issues.
Monitor production workflows and resolve pipeline failures.
Implement data modeling and ETL best practices.
Leverage Git + CI/CD for version-controlled deployments.
Must-Have Technical Skills
Hadoop ecosystem mastery (HDFS, YARN, Hive).
Spark + Scala proficiency for large-scale processing.
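To illustrate the kind of work described above, here is a minimal Spark/Scala sketch of consolidating two source systems into a Hadoop data lake. The input/output HDFS paths, the `ConsolidateEvents` object name, and the event schema (`customer_id`, `event_type`, `event_ts`) are all hypothetical placeholders, not part of this role's actual codebase; a production pipeline would add configuration management, error handling, and monitoring hooks.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Hypothetical batch job: merge events from two landed source systems
// into one partitioned data-lake table on HDFS.
object ConsolidateEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("consolidate-events")
      .getOrCreate()
    import spark.implicits._

    // Read two hypothetical source systems landed as Parquet on HDFS.
    val crm = spark.read.parquet("hdfs:///landing/crm/events")
    val web = spark.read.parquet("hdfs:///landing/web/events")

    // Normalize both sources to a shared schema, then union them.
    val unified = crm.select($"customer_id", $"event_type", $"event_ts")
      .unionByName(web.select($"customer_id", $"event_type", $"event_ts"))
      .withColumn("ingest_date", current_date())

    // Partitioning by ingest date lets downstream Hive queries prune
    // partitions instead of scanning the whole table.
    unified.write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("hdfs:///lake/events")

    spark.stop()
  }
}
```

The `partitionBy` choice matters for the performance-tuning responsibility above: date-partitioned Parquet is the usual layout that keeps Hive/Spark reads of a data lake fast as it grows.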