Senior Software Engineer - Data Platform

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Databricks Inc.
Full Time position
Listed on 2026-03-01
Job specializations:
  • Software Development
    Data Engineer
Salary/Wage Range: 125,000 - 150,000 USD yearly
Job Description & How to Apply Below

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development. We do this by building and running the world's best data and AI infrastructure platform, so our customers can focus on the high value challenges that are central to their own missions. Our engineering teams build technical products that fulfill real, important needs in the world.

We continually push the boundaries of data and AI technology, while operating with the resilience, security, and scale that are essential to our customers' success on our platform.

We develop and operate one of the largest-scale software platforms in the world. The fleet consists of millions of virtual machines that generate terabytes of logs and process exabytes of data per day. At this scale we routinely observe cloud hardware, network, and operating-system faults, and our software must gracefully shield our customers from all of them.

As a Senior Software Engineer on the Data Platform team, you will help build the Data Intelligence Platform for Databricks, which will allow us to automate decision‑making across the entire company. You will achieve this in collaboration with Databricks Product Teams, Data Science, Applied AI, and many others. You will develop a variety of tools spanning logging, orchestration, data transformation, a metrics store, governance platforms, data consumption layers, and more.

You will do this using the latest Databricks products and other tools in the data ecosystem; the team also functions as a large in‑house production customer that dogfoods Databricks and guides the future direction of the product.

The impact you will have:
  • Design and run the Databricks metrics store that enables all business units and engineering teams to bring their detailed metrics into a common platform for sharing and aggregation, with high quality, introspection ability and query performance.
  • Design and run the cross‑company Data Intelligence Platform, which contains every business and product metric used to run Databricks. You’ll play a key role in developing the right balance of data protections and ease of shareability for the Data Intelligence Platform as we transition to a public company.
  • Develop tooling and infrastructure to efficiently manage and run Databricks on Databricks at scale, across multiple clouds, geographies and deployment types. This includes CI/CD processes, test frameworks for pipelines and data quality, and infrastructure-as-code tooling.
  • Design the base ETL framework used by all pipelines developed at the company.
  • Partner with our engineering teams to provide leadership in developing the long‑term vision and requirements for the Databricks product.
  • Build reliable data pipelines and solve data problems using Databricks, our partner’s products and other OSS tools. Provide early feedback on the design and operations of these products.
  • Establish conventions and create new APIs for telemetry, debug, feature and audit event log data, and evolve them as the product and underlying services change.
  • Represent Databricks at academic and industrial conferences & events.
What we look for:
  • 6+ years of industry experience
  • 4+ years of experience providing technical leadership on large projects similar to the ones described above - ETL frameworks, metrics stores, infrastructure management, data security.
  • Experience building, shipping and operating reliable multi‑geo data pipelines at scale.
  • Experience working with and operating workflow or orchestration frameworks, including open-source tools like Airflow and dbt, or commercial enterprise tools.
  • Experience with large‑scale messaging systems like Kafka or RabbitMQ, or commercial systems.
  • Excellent cross‑functional collaboration and communication skills; a consensus builder.
  • Passion for data infrastructure and for enabling others by making their data easier to access.