
Databricks Developer

Job in 201301, Noida, Uttar Pradesh, India
Listing for: Confidential
Full Time position
Listed on 2026-02-03
Job specializations:
  • IT/Tech
    Data Engineer, Big Data
Salary/Wage Range or Industry Benchmark: INR 800,000 per year
Job Description & How to Apply Below
We are seeking an experienced Databricks Developer / Data Engineer to design, develop, and optimize data pipelines, ETL workflows, and big data solutions using Databricks. The ideal candidate should have expertise in Apache Spark, PySpark, SQL, and cloud-based data platforms (Azure, AWS, GCP). This role involves working with large-scale datasets, data lakes, and data warehouses to drive business intelligence and analytics.

Key Responsibilities:

Design, build, and optimize ETL and ELT pipelines using Databricks and Apache Spark
Work with big data processing frameworks (PySpark, Scala, SQL) for data transformation and analytics
Implement Delta Lake architecture for data reliability, ACID transactions, and schema evolution
Integrate Databricks with cloud services such as Azure Data Lake, AWS S3, GCP BigQuery, and Snowflake
Develop and maintain data models, data lakes, and data warehouse solutions
Tune Spark performance, job scheduling, and cluster configurations
Work with Azure Synapse, AWS Glue, or GCP Dataflow to enable seamless data integration
Implement CI/CD automation for data pipelines using Azure DevOps, GitHub Actions, or Jenkins
Perform data quality checks, validation, and governance using Databricks Unity Catalog
Collaborate with data scientists, analysts, and business teams to support analytics and AI/ML models

Required Skills & Qualifications:

6+ years of experience in data engineering and big data technologies
Strong expertise in Databricks, Apache Spark, and PySpark/Scala
Hands-on experience with SQL, NoSQL, and structured/unstructured data processing

Experience with cloud platforms (Azure, AWS, GCP) and their data services
Proficiency in Python, SQL, and Spark optimizations

Experience with Delta Lake, Lakehouse architecture, and metadata management
Strong understanding of ETL/ELT processes, data lakes, and warehousing concepts

Experience with streaming data processing (Kafka, Event Hubs, Kinesis, etc.)
Knowledge of security best practices, role-based access control (RBAC), and compliance
Experience in Agile methodologies and working in cross-functional teams

Preferred Qualifications:

Databricks Certifications (Databricks Certified Data Engineer Associate/Professional)

Experience with Machine Learning and AI/ML pipelines on Databricks
Hands-on experience with Terraform, CloudFormation, or other Infrastructure as Code (IaC) tools