
Data Engineer

Job in Gurgaon, Haryana, India
Listing for: Confidential
Full Time position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description & How to Apply Below
Position: Data Engineer
Location: Gurgaon, Sector 32

Experience: 2 to 4 years

Budget: 20 to 25 LPA

Skills required:

Python (very strong skills)
Kafka
AWS
Spark-based data processing jobs
CDC (change data capture)
Apache Airflow
Prefect
Strong SQL
DSA (data structures and algorithms)
Data lakes
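CDC (change data capture) here means tracking row-level inserts, updates and deletes in a source system and propagating them downstream. As a rough illustration of the classification involved (a toy snapshot diff; production CDC tools such as Debezium read the database transaction log instead), a sketch in Python:

```python
# Toy change-data-capture: diff two {primary_key: row} snapshots of a table
# and emit insert/update/delete events. Illustrates the concept only; real
# CDC pipelines tail the database's write-ahead/transaction log.

def capture_changes(old: dict, new: dict) -> list:
    """Return change events between two {pk: row} snapshots."""
    events = []
    for pk, row in new.items():
        if pk not in old:
            events.append({"op": "insert", "pk": pk, "after": row})
        elif old[pk] != row:
            events.append({"op": "update", "pk": pk,
                           "before": old[pk], "after": row})
    for pk, row in old.items():
        if pk not in new:
            events.append({"op": "delete", "pk": pk, "before": row})
    return events

old = {1: {"name": "alice"}, 2: {"name": "bob"}, 4: {"name": "dan"}}
new = {1: {"name": "alice"}, 2: {"name": "bobby"}, 3: {"name": "carol"}}
print(capture_changes(old, new))
```

This emits one insert (key 3), one update (key 2) and one delete (key 4); the same event shape (op, before, after) is what log-based CDC connectors publish to Kafka.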

Job brief

We are looking for a highly skilled Data Engineer with a strong foundation in programming, data structures and distributed data systems. The ideal candidate has hands-on experience with Python or Go, is deeply experienced in building batch and streaming pipelines using Kafka and Spark, and is comfortable working in a cloud-native (AWS) environment. This role involves building and optimizing scalable data pipelines that power analytics, reporting and downstream applications.

You will work closely with data scientists, BI teams and platform engineers to deliver reliable, high-performance data systems aligned with business goals.

Responsibilities

Design, build and maintain scalable batch and streaming data pipelines.
Develop real-time data ingestion and processing systems using Kafka.
Build and optimize Spark-based data processing jobs (batch and streaming).
Write high-quality, production-grade code using Python or Go.
Apply strong knowledge of data structures, algorithms and system design to solve complex data problems.
Orchestrate workflows using Apache Airflow and other open-source tools.
Ensure data quality, reliability and observability across pipelines.
Work extensively on AWS (S3, EC2, IAM, EMR / Glue / EKS or similar services).
Collaborate with analytics and BI teams to support tools like Apache Superset.
Continuously optimize pipeline performance, scalability and cost.
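To give a flavor of the aggregation logic such streaming pipelines implement: a stdlib-only sketch of a tumbling-window count, the kind of computation a Spark structured-streaming job over a Kafka topic expresses declaratively (window size of 60 is an arbitrary choice for the example):

```python
# Toy tumbling-window aggregation over (timestamp, key) events: bucket each
# event into a fixed-size window and count occurrences per (window, key).
# Illustration only; Spark structured streaming handles this with
# groupBy(window(...), key).count() over an unbounded Kafka source.
from collections import defaultdict

def tumbling_window_counts(events, window_size=60):
    """Group (timestamp, key) events into fixed windows; count per key."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_window_counts(events))
```

The two "click" events at t=5 and t=30 land in the [0, 60) window, while the t=65 "view" and t=70 "click" land in [60, 120).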

Requirements and Skills

Strong proficiency in Python or Go (production-level coding required).
Excellent understanding of Data Structures and Algorithms.
Hands-on experience with Apache Kafka for real-time streaming pipelines.
Strong experience with Apache Spark (batch and structured streaming).
Solid understanding of distributed systems and data processing architectures.
Proficiency in SQL and working with large-scale datasets.
Hands-on experience with Apache Airflow for pipeline orchestration.
Experience working with open-source analytics tools such as Apache Superset.
2-4 years of relevant experience required.

Good To Have

Experience with data lake architectures.
Understanding of data observability, monitoring and alerting.
Exposure to ML data pipelines or feature engineering workflows.

Education

B.Tech / BE in Computer Science, Information Technology, or a related engineering discipline