Job Description
- Optimize enterprise data platforms built on Spark and Hadoop.
- Build, deploy, and performance-tune Spark/Scala applications for distributed data processing.
- Consolidate multi-source systems into unified Hadoop-based data lakes with production-grade reliability.
- Optimize Spark jobs and troubleshoot Hadoop cluster issues.
- Monitor production workflows and resolve pipeline failures.
- Implement data modeling and ETL best practices.
- Leverage Git + CI/CD for version-controlled deployments.
- Mastery of the Hadoop ecosystem (HDFS, YARN, Hive).
- Proficiency in Spark and Scala for large-scale data processing.