
Data Engineer - Python/ETL

Job in 411001, Pune, Maharashtra, India
Listing for: Confidential
Full Time position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description

Job Summary:

We are looking for a Data Engineer with 3 to 5 years of experience to join our team and contribute to building scalable, reliable, and high-performing data pipelines.

The ideal candidate should have strong expertise in Python, PySpark, SQL, AWS Cloud (EMR, Glue, Athena), Apache Airflow, and data warehousing concepts.

You will be responsible for designing, developing, and optimizing data pipelines that enable data-driven decision-making across the organization.

Key Responsibilities

Understand business requirements and translate them into scalable data engineering solutions.
Design, develop, and maintain ETL/ELT pipelines from various sources (databases, APIs, files, streaming).
Work extensively with AWS cloud services (S3, EMR, Glue, Athena, Lake Formation) to build and optimize data workflows.
Implement workflows/orchestration using Apache Airflow or equivalent tools.
Write efficient SQL queries for data extraction, transformation, and reporting.
Work with PySpark and distributed computing frameworks to process large-scale datasets.
Apply data warehousing concepts to design and manage data models supporting analytics and reporting.
Optimize Spark jobs for performance, cost efficiency, and scalability.
Ensure data quality, reliability, and governance through validation, monitoring, and automation (a small validation sketch follows this list).
Collaborate with Analysts and Business teams to deliver trusted data solutions.
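
As one concrete illustration of the data-quality responsibility above, here is a minimal validation sketch in Python with Pandas. The table shape, the column names (order_id, customer_id, amount), and the specific checks are hypothetical placeholders, not details from this posting.

import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Run basic quality checks; return failure messages (empty list = pass)."""
    failures = []
    if df.empty:
        return ["dataset is empty"]
    for col in ("order_id", "customer_id"):        # hypothetical key columns
        if df[col].isna().any():
            failures.append(f"null values in {col}")
    if df["order_id"].duplicated().any():          # primary key must be unique
        failures.append("duplicate order_id values")
    if (df["amount"] < 0).any():                   # measures must be non-negative
        failures.append("negative amounts present")
    return failures

sample = pd.DataFrame({"order_id": [1, 2], "customer_id": [10, 11], "amount": [99.5, 12.0]})
print(validate_orders(sample))                     # [] means all checks passed
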

Required Skills

Programming:
Strong expertise in Python (with Pandas) and SQL.
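
As a rough sketch of how Python (with Pandas) and SQL combine in day-to-day work: aggregate in SQL, then keep transforming in Pandas. The in-memory SQLite database and the sales table below are stand-ins for a real warehouse connection, purely for illustration.

import sqlite3
import pandas as pd

# In-memory SQLite stands in for a real warehouse connection.
conn = sqlite3.connect(":memory:")
pd.DataFrame(
    {"customer_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]}
).to_sql("sales", conn, index=False)

# Aggregate in SQL, then continue transforming in Pandas.
totals = pd.read_sql_query(
    "SELECT customer_id, SUM(amount) AS total FROM sales GROUP BY customer_id",
    conn,
)
totals["total_rank"] = totals["total"].rank(ascending=False)
print(totals)
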
Big Data Processing:
Hands-on experience with PySpark and Spark optimization techniques.
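
A minimal PySpark sketch of two common optimization techniques: broadcasting a small dimension table to avoid a shuffle, and partitioning the output by a frequent filter key. The dataset names and output path are invented for illustration.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, 101, 20.0), (2, 102, 35.0)], ["order_id", "customer_id", "amount"]
)
customers = spark.createDataFrame(
    [(101, "IN"), (102, "US")], ["customer_id", "country"]
)

# Broadcast the small dimension so the large fact table is not shuffled.
enriched = orders.join(F.broadcast(customers), "customer_id")

# Partition the output by a frequent filter key to prune reads downstream.
enriched.write.mode("overwrite").partitionBy("country").parquet("/tmp/enriched_orders")
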
Cloud Platforms:
Proficiency in AWS (EMR, Glue, S3, Athena).
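
One hedged sketch of the AWS piece, using boto3 (the AWS SDK for Python) to run an Athena query over data catalogued from S3. The region, database, table, and results bucket are placeholders to substitute with real values.

import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT country, COUNT(*) AS n FROM orders GROUP BY country",
    QueryExecutionContext={"Database": "analytics"},          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes; production code would add a timeout.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
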
Workflow Orchestration:
Experience with Apache Airflow for job scheduling and automation.
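
A minimal Airflow DAG sketch, assuming Airflow 2.4+ (where the schedule argument replaced schedule_interval). The DAG id, task names, and callables are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")        # placeholder for real extract logic

def load():
    print("write to warehouse")      # placeholder for real load logic

with DAG(
    dag_id="daily_orders_etl",       # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load              # load runs only after extract succeeds
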
Data Warehousing:
Solid understanding of data warehousing concepts, dimensional modeling, and ETL best practices.
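
To make the dimensional-modeling point concrete, here is a toy star-schema load in Pandas: fact rows carry measures plus a surrogate key that points at a dimension row. All table and column names are invented for illustration.

import pandas as pd

# Dimension with a warehouse-owned surrogate key and the source's natural key.
dim_customer = pd.DataFrame({
    "customer_key": [1, 2],             # surrogate key
    "customer_id": ["C-100", "C-200"],  # natural/business key
    "country": ["IN", "US"],
})

# Staged facts arrive keyed by the natural key; swap it for the surrogate key.
staged = pd.DataFrame({"customer_id": ["C-100", "C-200"], "amount": [20.0, 35.0]})
fact_sales = staged.merge(
    dim_customer[["customer_key", "customer_id"]], on="customer_id"
).drop(columns="customer_id")
print(fact_sales)                       # customer_key + amount, fact-table ready
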
Database Skills:
Experience with relational databases (PostgreSQL).
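
A short PostgreSQL sketch using psycopg2, showing an idempotent upsert, a common loading pattern in ETL work. The connection parameters and the orders table are placeholders.

import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="analytics", user="etl", password="..."  # placeholders
)

# The connection context manager commits on success and rolls back on error.
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO orders (order_id, amount)
        VALUES (%s, %s)
        ON CONFLICT (order_id) DO UPDATE SET amount = EXCLUDED.amount
        """,
        (42, 99.50),
    )
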
Streaming & Messaging:
Understanding of Kafka for real-time data streaming and integration.
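
A minimal Kafka consumer sketch using the kafka-python package. The topic, broker address, and consumer group are placeholders, and a production consumer would also configure security, batching, and error handling.

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # placeholder topic
    bootstrap_servers=["localhost:9092"],      # placeholder broker
    group_id="etl-loader",
    auto_offset_reset="earliest",
    consumer_timeout_ms=10_000,                # stop iterating after 10s idle
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    record = message.value                     # already a dict via the deserializer
    print(record)                              # a real pipeline would batch and load
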
Containerization:
Knowledge of Docker for packaging and deploying data applications.
Best Practices:
Familiarity with CI/CD, version control (Git), and modern data engineering standards.
