
Data Engineer (Databricks)

Job in 110006, Delhi, Delhi, India
Listing for: Talent Carpet
Full Time position
Listed on 2026-02-08
Job specializations:
  • IT/Tech
    Data Engineer, Data Science Manager, Big Data, Cloud Computing
Job Description
Job Title: Data Engineer (Databricks)

Experience Level: 5+ years

Work Location: Remote

Timing: Must be open to late-night hours, since clients are US-based.

Job Summary
We are seeking a high-impact Data Engineer to architect, build, and scale enterprise-grade data platforms with a strong focus on Databricks and Apache Spark. This role requires end-to-end ownership of data engineering solutions — from design and ingestion to optimization, deployment, and production support.
The ideal candidate brings deep hands-on expertise in Databricks, PySpark, and SQL, with a proven track record of building reliable, high-performance data pipelines. Experience with Palantir Foundry (Contour, Code Repo, Data Lineage) is a strong plus.

Key Responsibilities

• Architect, build, and own scalable, fault-tolerant data pipelines using Python, PySpark, and SQL (a minimal sketch follows this list).

• Lead development of Databricks notebooks, Jobs, and Workflows for large-scale batch and streaming workloads.

• Design and implement Lakehouse architectures using Delta Lake, following best practices for data modeling, versioning, and performance.

• Optimize Spark jobs and Databricks clusters for performance, reliability, and scalability.

• Build and manage end-to-end ETL/ELT pipelines ingesting data from multiple internal and external sources.

• Collaborate closely with analytics, data science, and business teams to translate requirements into robust data solutions.

• Define and enforce data engineering standards, best practices, and reusable frameworks across teams.

• Implement CI/CD pipelines and version control strategies for Databricks workloads.

• Monitor production pipelines, troubleshoot failures, and drive continuous improvements in data reliability and performance.

• Create and maintain technical documentation, data flow diagrams, and operational runbooks using tools like Confluence and Jira.
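
As a rough illustration of the pipeline work described above, here is a minimal PySpark batch sketch. All paths, table names, and columns are hypothetical placeholders, not details from this posting.

    # Minimal batch pipeline sketch: ingest a raw extract, clean it, write a Delta table.
    # Paths and column names are illustrative placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_batch_pipeline").getOrCreate()

    # Ingest: read a raw landing-zone extract.
    raw = (spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/mnt/landing/orders/"))

    # Transform: deduplicate, fix types, and derive a partition column.
    orders = (raw
        .dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .filter(F.col("order_id").isNotNull()))

    # Load: write a partitioned Delta table for downstream consumers.
    (orders.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/lakehouse/silver/orders"))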

Technical Skills
Must Have

• Expert-level experience with Python, PySpark, and SQL.

• Strong hands-on experience with Databricks (Workflows, Jobs, notebooks, cluster management).

• Solid understanding of Apache Spark internals and distributed data processing.

• Experience with Delta Lake, including data modeling and performance optimization.

• Exposure to Unity Catalog, data governance, and access control concepts (a short sketch follows this list).

• Proven ability to build and support production-grade data pipelines at scale.

• Strong analytical, debugging, and problem-solving skills.

• Excellent communication and cross-functional collaboration abilities.
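
As a short illustration of the Unity Catalog point above, a sketch assuming a Databricks notebook where "spark" is the ambient session; the catalog, schema, table, and group names are hypothetical:

    # Unity Catalog exposes tables through a three-level namespace:
    # <catalog>.<schema>.<table>
    df = spark.table("main.sales.orders")
    df.show(5)

    # Access control is managed with SQL GRANT statements against the same namespace.
    spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")
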
Nice to Have

• Advanced experience with Delta Lake optimizations, schema evolution, and time travel (a brief sketch follows this list).

• Experience with Palantir Foundry (Contour, Code Repo, Data Lineage).

• Familiarity with orchestration tools such as Databricks Workflows or Airflow.

• Experience in performance tuning and cost optimization within large-scale data platforms.

• Mentoring or technical leadership experience.
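
For the Delta Lake item above, a brief sketch of schema evolution and time travel, again assuming an ambient Databricks "spark" session; the path and sample data are placeholders:

    # Schema evolution: append a batch that carries a new column ("channel"),
    # letting Delta merge it into the table schema instead of failing.
    path = "/mnt/lakehouse/silver/orders"
    new_batch = spark.createDataFrame([(101, "web")], ["order_id", "channel"])
    (new_batch.write
        .format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .save(path))

    # Time travel: read the table as it existed at an earlier version,
    # and inspect the change history that makes this possible.
    v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
    spark.sql(f"DESCRIBE HISTORY delta.`{path}`").show()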