We are looking for a hands-on Databricks DataOps Engineer (7-10 years of experience) who can operate across both data engineering and platform operations. The role requires deep expertise in Databricks along with strong DataOps practices to ensure scalable, reliable, and automated data pipelines. The ideal candidate is comfortable both building complex workloads and managing the underlying platform components.
Key Responsibilities:
- Develop, optimize, and maintain data pipelines using Databricks (PySpark, SQL, Delta Lake).
- Implement DataOps best practices, including CI/CD, automated testing, version control, and environment management.
- Manage Databricks platform operations: clusters, jobs, permissions, workspace configuration, and performance tuning.
- Build monitoring and observability for pipelines, data quality, and platform health.
- Troubleshoot production issues across both pipeline logic and platform infrastructure.
- Collaborate with data engineers, platform teams, and DevOps to streamline deployments and improve reliability.
- Contribute to metadata‑driven frameworks, governance standards, and operational automation.
Required Skills:
- Strong hands-on experience with Databricks (notebooks, jobs, clusters, Delta Lake).
- Solid understanding of DataOps concepts: automation, CI/CD, testing, governance.
- Proficiency in PySpark, SQL, and distributed data processing.
- Experience with DevOps tooling and practices.
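To illustrate the automated-testing side of the DataOps practices listed above, here is a minimal sketch (not part of the posting) of a data-quality gate a pipeline might run before publishing a table. It uses plain Python with a hypothetical `check_quality` helper; in Databricks the same logic would typically run against a Spark DataFrame rather than a list of dicts.

```python
# Illustrative sketch only: a simple pre-publish data-quality check of the
# kind a DataOps pipeline might automate. All names here are hypothetical.

def check_quality(rows, required_fields, key_field):
    """Return a list of human-readable violations found in `rows`."""
    violations = []
    seen_keys = set()
    for i, row in enumerate(rows):
        # Flag missing or empty required fields.
        for field in required_fields:
            if row.get(field) in (None, ""):
                violations.append(f"row {i}: missing required field '{field}'")
        # Flag duplicate primary keys.
        key = row.get(key_field)
        if key in seen_keys:
            violations.append(f"row {i}: duplicate key '{key}'")
        seen_keys.add(key)
    return violations

rows = [
    {"id": 1, "name": "alpha"},
    {"id": 2, "name": ""},       # missing name
    {"id": 2, "name": "gamma"},  # duplicate id
]
print(check_quality(rows, required_fields=["id", "name"], key_field="id"))
```

In a CI/CD setup, a check like this would run as a pipeline step and fail the deployment when violations are found.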