
Lead Engineer, Inference Platform

Job in Palo Alto, Santa Clara County, California, 94306, USA
Listing for: MongoDB
Full Time position
Listed on 2026-01-25
Job specializations:
  • Software Development
    AI Engineer, Software Engineer, Data Engineer, Machine Learning/ML Engineer
Job Description & How to Apply Below

We’re looking for a Lead Engineer, Inference Platform to join our team building the inference platform for embedding models that power semantic search, retrieval, and AI-native features across MongoDB Atlas.

This role is part of the broader Search and AI Platform team and involves close collaboration with AI engineers and researchers from our Voyage.ai acquisition, who are developing industry-leading embedding models. Together, we’re building the infrastructure that enables real‑time, high‑scale, and low‑latency inference — all deeply integrated into Atlas and optimized for developer experience.

As a Lead Engineer, Inference Platform, you’ll be hands‑on with design and implementation, while working with engineers across experience levels to build a robust, scalable system. The focus is on latency, availability, observability, and scalability in a multi‑tenant, cloud‑native environment. You will also be responsible for guiding the technical direction of the team, mentoring junior engineers, and ensuring the delivery of high‑quality, impactful features.

We are looking to speak with candidates based in Palo Alto for our hybrid working model.

What You’ll Do
  • Partner with Search Platform and Voyage.ai AI engineers and researchers to productionize state‑of‑the‑art embedding models and rerankers, supporting both batch and real‑time inference

  • Lead key projects around performance optimization, GPU utilization, autoscaling, and observability for the inference platform

  • Design and build components of a multi‑tenant inference service that integrates with Atlas Vector Search, driving capabilities for semantic search and hybrid retrieval

  • Contribute to platform features like model versioning, safe deployment pipelines, latency‑aware routing, and model health monitoring

  • Collaborate with peers across ML, infra, and product teams to define architectural patterns and operational practices that support high availability and low latency at scale

  • Guide decisions on model serving architecture using tools like vLLM, ONNX Runtime, and container orchestration in Kubernetes (see the serving sketch after this list)

  • Provide technical leadership and mentorship to junior engineers, fostering a culture of technical excellence and continuous improvement within the team
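
For concreteness, here is a minimal sketch of the kind of embedding-serving path this role touches, using ONNX Runtime (one of the tools named above). The model file, tokenizer name, and input names are illustrative assumptions, not details from this posting; in the actual platform this would sit behind batching, autoscaling, and health-check layers rather than a bare function call.

    # Minimal sketch: batch embedding inference with ONNX Runtime.
    # "embedding_model.onnx" and the tokenizer name are hypothetical placeholders.
    import numpy as np
    import onnxruntime as ort
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
    session = ort.InferenceSession(
        "embedding_model.onnx",
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )

    def embed(texts):
        # Tokenize the batch, then feed only the inputs the exported graph expects.
        enc = tokenizer(texts, padding=True, truncation=True, return_tensors="np")
        feed = {i.name: enc[i.name] for i in session.get_inputs() if i.name in enc}
        token_embeddings = session.run(None, feed)[0]        # (batch, seq_len, dim)
        mask = enc["attention_mask"][..., None].astype(np.float32)
        # Mean-pool over non-padding tokens and L2-normalize for cosine similarity.
        pooled = (token_embeddings * mask).sum(axis=1) / mask.sum(axis=1)
        return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

    print(embed(["semantic search on Atlas", "vector retrieval"]).shape)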

Who You Are
  • 8+ years of engineering experience in backend systems, ML infrastructure, or scalable platform development, and the ability to provide technical leadership and guidance to a team of engineers

  • Expertise in serving embedding models in production environments

  • Strong systems skills in languages like Go, Rust, C++, or Python, and experience profiling and optimizing performance

  • Comfortable working on cloud‑native distributed systems, with a focus on latency, availability, and observability

  • Familiarity with inference runtimes and vector search systems (e.g., Faiss, HNSW, ScaNN); a small index sketch follows this list

  • Proven ability to collaborate across disciplines and experience levels, from ML researchers to junior engineers

  • Experience with high‑scale SaaS infrastructure, particularly in multi‑tenant environments

  • 1+ years of experience serving as tech lead on a large‑scale ML inference or training platform software project
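
As a concrete point of reference for the vector-search familiarity mentioned above, the following is a small, self-contained HNSW example in Faiss. The dimensionality and graph parameters are assumptions chosen for illustration, not values used by Atlas Vector Search.

    # Minimal sketch: approximate nearest-neighbor search with a Faiss HNSW index.
    # Parameters below (dim, M, ef values) are assumed for illustration only.
    import numpy as np
    import faiss

    dim = 768                                   # embedding dimensionality (assumed)
    index = faiss.IndexHNSWFlat(dim, 32)        # 32 graph neighbors per node (M)
    index.hnsw.efConstruction = 200             # build-time candidate-list size
    index.hnsw.efSearch = 64                    # query-time candidate-list size

    vectors = np.random.rand(10_000, dim).astype("float32")
    index.add(vectors)                          # HNSW needs no separate training step

    distances, ids = index.search(vectors[:5], 10)   # top-10 neighbors per query
    print(ids.shape)                                 # -> (5, 10)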

Nice to Have
  • Prior experience working with model teams on inference‑optimized architectures

  • Background in hybrid retrieval, prompt‑based pipelines, or retrieval‑augmented generation (RAG)

  • Contributions to relevant open‑source ML serving infrastructure

  • 1+ years of experience in managing a technical team focused on ML inference or training infrastructure

Why Join Us
  • Be part of shaping the future of AI‑native developer experiences on the world’s most popular developer data platform

  • Collaborate with ML experts from Voyage.ai to bring cutting‑edge research into production at scale

  • Solve hard problems in real‑time inference, model serving, and semantic retrieval — in a system used by thousands of customers worldwide

  • Work in a culture that values mentorship, autonomy, and strong technical craft

  • Competitive compensation, equity, and career growth in a hands‑on technical leadership role

About MongoDB

MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era,…

To View & Apply for jobs on this site that accept applications from your location or country, tap the button below to make a Search.
(If this job is in fact in your jurisdiction, then you may be using a Proxy or VPN to access this site, and to progress further, you should change your connectivity to another mobile device or PC).
 
 
 
Search for further Jobs Here:
(Try combinations for better Results! Or enter less keywords for broader Results)
Location
Increase/decrease your Search Radius (miles)

Job Posting Language
Employment Category
Education (minimum level)
Filters
Education Level
Experience Level (years)
Posted in last:
Salary