Data Engineer – AWS, Python
Role Description:
We are seeking an experienced ETL/Data Engineer with strong cloud expertise to design, develop, and maintain data pipelines. The ideal candidate will have hands-on experience with AWS services, large-scale data processing, and deployment pipelines.
Responsibilities:
- Design and implement data pipelines on cloud platforms (AWS preferred).
- Handle large volumes of data from multiple sources.
- Perform data cleansing, data validation, and transformation.
- Hands-on ETL development using Python and SQL.
- Utilize AWS services such as Glue, Glue Crawlers, Lambda, Redshift, Athena, S3, EC2, IAM.
- Implement monitoring and logging mechanisms using Amazon CloudWatch and set up alerts.
- Deploy solutions on the cloud.
- Integrate CI/CD pipelines to build artifacts and deploy changes across environments.
- Work with scheduling frameworks like Airflow and AWS Step Functions to manage workflows.
- Communicate effectively with stakeholders and work collaboratively with cross-functional teams.
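To illustrate the kind of work described above, here is a minimal sketch of the cleanse-validate-transform step in plain Python. All names (`clean_record`, `REQUIRED_FIELDS`, the sample rows) are illustrative, not taken from the posting; a production pipeline would run similar logic inside Glue or PySpark jobs.

```python
# Minimal ETL sketch: cleanse, validate, and transform a batch of records.
# Names and fields here are hypothetical, chosen only for illustration.

REQUIRED_FIELDS = {"id", "amount"}

def clean_record(raw: dict) -> dict:
    """Cleansing: normalise keys to lowercase and strip stray whitespace."""
    return {k.strip().lower(): (v.strip() if isinstance(v, str) else v)
            for k, v in raw.items()}

def validate_record(rec: dict) -> bool:
    """Validation: require mandatory fields and a numeric amount."""
    if not REQUIRED_FIELDS.issubset(rec):
        return False
    try:
        float(rec["amount"])
        return True
    except (TypeError, ValueError):
        return False

def transform(records: list[dict]) -> list[dict]:
    """Transformation: keep only valid rows, casting amount to float."""
    out = []
    for raw in records:
        rec = clean_record(raw)
        if validate_record(rec):
            rec["amount"] = float(rec["amount"])
            out.append(rec)
    return out

rows = [
    {" ID ": "1", "Amount": " 10.5 "},  # messy but recoverable
    {"id": "2"},                        # missing amount -> dropped
    {"id": "3", "amount": "oops"},      # non-numeric amount -> dropped
]
print(transform(rows))  # → [{'id': '1', 'amount': 10.5}]
```

In a real deployment these functions would typically be wrapped in an Airflow task or a Glue job script, with CloudWatch alarms on the rejected-record count.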
Skills Required:
- Cloud Computing: Amazon Web Services (AWS)
- Databases: Microsoft SQL Server 2019
- Big Data & Analytics: PySpark
- Programming: Python, SQL
- Workflow Orchestration: Airflow, AWS Step Functions
- Monitoring & Logging: Amazon CloudWatch
Experience: 6–8 years