
Staff Software Engineer, Model Serving

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Databricks Inc.
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    AI Engineer, Data Engineer, Systems Engineer, Cloud Computing
Salary/Wage Range: $192,000 - $260,000 USD per year
Job Description
Position: Staff Software Engineer, Model Serving

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business.

Databricks’ Model Serving product provides enterprises with a unified, scalable, and governed platform to deploy and manage AI/ML models — from traditional ML to fine‑tuned and proprietary large language models. It offers real‑time, low‑latency inference, governance, monitoring, and lineage. As AI adoption accelerates, Model Serving is a core pillar of the Databricks platform, enabling customers to operationalize models at scale with strong SLAs and cost efficiency.

As a Staff Engineer, you’ll play a critical role in shaping both the product experience and the foundational infrastructure of Model Serving. You will design and build systems that enable high‑throughput, low‑latency inference across CPU and GPU workloads, influence architectural direction, and collaborate closely across platform, product, infrastructure, and research teams to deliver a world‑class serving platform.

The impact you will have
  • Design and implement core systems and APIs that power Databricks Model Serving, ensuring scalability, reliability, and operational excellence.
  • Partner with product and engineering leadership to define the technical roadmap and long‑term architecture for serving workloads.
  • Drive architectural decisions and trade‑offs to optimize performance, throughput, autoscaling, and operational efficiency for CPU and GPU serving workloads.
  • Contribute directly to key components across the serving infrastructure — from model container builds and deployment workflows to runtime systems like routing, caching, observability, and intelligent autoscaling — ensuring smooth and efficient operations at scale.
  • Collaborate cross‑functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.
  • Lead technical initiatives that improve latency, availability, and cost‑effectiveness across both customer‑facing and foundational serving layers.
  • Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.
  • Represent the team in cross‑organizational technical discussions and influence Databricks’ broader AI platform strategy.
What we look for
  • 10+ years of experience building and operating large‑scale distributed systems.
  • Deep expertise in model serving, inference systems, and related infrastructure (e.g., routing, scheduling, autoscaling, and observability).
  • Strong foundation in algorithms, data structures, and system design as applied to large‑scale, low‑latency serving systems.
  • Proven ability to deliver technically complex, high‑impact initiatives that create measurable customer or business value.
  • Experience leading architecture for large‑scale, performance‑sensitive CPU/GPU inference systems.
  • Strong communication skills and ability to collaborate across teams in fast‑moving environments.
  • Strategic and product‑oriented mindset with the ability to align technical execution with long‑term vision.
  • Passion for mentoring, growing engineers, and fostering technical excellence.
Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range for this role is listed below and represents the expected salary range for non‑commissionable roles or on‑target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job‑related skills, depth of experience, relevant certifications and training, and specific work location.

Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed above.

Local Pay Range: $192,000 - $260,000 USD

About Databricks

Dat…
