Job Description & How to Apply Below
We are seeking a highly skilled and experienced Big Data Engineer (Scala & AWS mandatory) to join our dynamic team. The ideal candidate will have a strong background in cloud-based data engineering, with at least 4 years of hands-on AWS experience. You will be responsible for designing, building, and maintaining scalable data pipelines and solutions using modern data engineering tools and frameworks.
Key Responsibilities:
Design and implement robust, scalable, and high-performance data pipelines on AWS.
Develop and maintain ETL workflows using Airflow.
Write efficient and optimized code in Python for data processing and transformation.
Work with SQL to query, manipulate, and analyze large datasets.
Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
Ensure data quality, integrity, and security across all data platforms.
Monitor and troubleshoot data pipeline performance and reliability.
Mandatory Skills:
Extensive AWS experience (minimum 4 years), including services such as S3, Lambda, Glue, Redshift, and EMR.
Scala – strong programming skills for data manipulation and automation.
SQL – advanced querying and optimization techniques.
Share your resume if you have the mandatory skills.