Key Responsibilities:
Big Data Development:
Design and implement big data solutions using Spark and Scala. Develop and manage data pipelines and workflows within the big data ecosystem.
Data Processing & Analysis:
Write optimized SQL queries for data processing and analysis. Apply data transformations and business rules using Spark, Scala, and Hadoop.
Hands-on Coding:
Develop and maintain code in Java, Python, and optionally Go, ensuring efficient processing of large datasets and integration with other platforms and services.
Data Optimization:
Tune and optimize Spark jobs for better performance, reducing computation time and improving resource utilization in a big data environment.
Agile Development:
Collaborate with cross-functional teams using Agile methodology to ensure timely delivery of high-quality data solutions.
Collaboration & Reporting:
Work with business analysts and other stakeholders to gather requirements, provide technical input, and ensure alignment with business needs. Regularly update stakeholders on project progress.
Testing & Troubleshooting:
Conduct unit tests, troubleshoot issues, and ensure the overall quality of data pipelines and solutions.
Requirements:
Experience: 7+ years of hands-on experience as a Big Data Engineer, specifically with Spark, Scala, Python, and Hadoop.
Strong Coding Skills:
Proven expertise in Java, Python, and Scala for building scalable, efficient data processing pipelines.
SQL Expertise:
Strong knowledge of SQL for data querying and transformation.
Big Data Ecosystem Knowledge:
Deep understanding of the big data ecosystem, including Hadoop and related tools such as Hive, Pig, and HBase.
Analytical & Problem-Solving Skills:
Strong analytical and troubleshooting skills, with the ability to solve complex big data challenges.
Agile Development:
Familiarity with Agile methodology and experience working in Agile teams.
Nice to Have:
Experience with Go and familiarity with cloud-based big data platforms (AWS, GCP, Azure).
Key Technologies:
Big Data Tools: Spark, Hadoop, Hive, Pig.
Languages: Scala, Java, Python, SQL, Go.
Methodology: Agile.
Operating Systems: Unix/Linux for executing commands and scripts.
Work Environment:
Work Mode: [Specify WFO/WFH/Hybrid].
Location: [Specify Location].
Vendor Rate: [Insert vendor rate in local currency].