
Data Engineer

Job in 110006, Delhi, Delhi, India
Listing for: Confidential
Full Time position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Job Description & How to Apply Below
We are seeking an experienced Senior Data Engineer with strong hands-on expertise in Databricks and AWS to design, develop, and optimize scalable data pipelines and modern data platforms. The ideal candidate will have deep experience with big data ecosystems, cloud-based data solutions (AWS), and ETL/ELT frameworks using Python, PySpark, and SQL.

Key Responsibilities

Design, build, and maintain data pipelines and ETL workflows using Databricks (PySpark, Spark SQL) and AWS services (S3, EMR, Glue, Lambda, RDS, DynamoDB); a brief illustrative sketch follows this list.

Implement data lakehouse architectures integrating Delta Lake, Snowflake, and AWS storage layers.

Develop and deploy automated and scalable data ingestion frameworks to process structured and unstructured data.

Build and optimize data models (Star/Snowflake schemas) for analytical and reporting use cases.

Orchestrate workflows using Airflow or similar scheduling tools; manage job dependencies and monitoring.

Collaborate with analytics, data science, and business teams to deliver high-quality, reliable, and well-documented data assets.

Ensure data quality, governance, and security best practices are maintained throughout pipelines.

Troubleshoot and tune performance for Spark clusters and SQL queries to optimize cost and efficiency.

Integrate with modern tools such as DBT, Kubernetes, and CI/CD pipelines for continuous delivery of data solutions.
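
For illustration only, here is a minimal sketch of the kind of pipeline work described in the first responsibility above, assuming a Databricks-style runtime where Delta Lake is available; the S3 paths and column names are hypothetical placeholders, not part of this posting.

```python
# Minimal illustrative PySpark pipeline: read raw JSON from S3,
# apply light cleansing, and write the result as a partitioned Delta table.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest structured/semi-structured landing data from the raw zone.
raw = spark.read.json("s3://example-raw-zone/orders/")

# Basic cleansing and enrichment before writing to the curated zone.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write as Delta, partitioned by date for downstream analytical queries.
(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("s3://example-curated-zone/orders/"))
```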

Required Skills & Experience

10+ years of experience in data engineering, with proven work on large-scale distributed data systems.

Strong hands-on experience with Databricks (PySpark, Spark SQL, Delta Lake).

Expertise in AWS Cloud Services: EMR, S3, Glue, Lambda, RDS, EC2, CloudWatch, IAM, etc.

Proficiency in Python and SQL for data transformation and automation.

Experience with Snowflake (SnowSQL, Snowpipe, schema design, and optimization).

Solid understanding of data warehousing concepts (ETL/ELT, dimensional modeling, data partitioning).

Experience with Airflow for scheduling and monitoring data pipelines (a brief DAG sketch follows this list).

Familiarity with CI/CD tools (Jenkins, Git, Docker, Kubernetes) for automated deployments.

Working knowledge of NoSQL databases (MongoDB, DynamoDB, Cassandra).

Excellent debugging, problem-solving, and performance-tuning skills.
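
As a flavor of the Airflow scheduling mentioned above, here is a minimal DAG sketch; the DAG id, schedule, and task callables are hypothetical and assume Airflow 2.4+, and a real deployment would trigger the Databricks and warehouse jobs rather than print statements.

```python
# Minimal illustrative Airflow DAG: a daily ingest task followed by a
# model-refresh task. All names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_orders():
    # Placeholder: in practice this would trigger the ingest pipeline.
    print("ingesting orders")


def refresh_models():
    # Placeholder: in practice this would rebuild dimensional models.
    print("refreshing star-schema tables")


with DAG(
    dag_id="orders_daily",            # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_orders", python_callable=ingest_orders)
    refresh = PythonOperator(task_id="refresh_models", python_callable=refresh_models)

    # Enforce the dependency: models refresh only after ingest succeeds.
    ingest >> refresh
```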