Engineering Manager, Deep Learning Inference
Listed on 2026-01-13
Software Development
Software Engineer, AI Engineer
Join NVIDIA as an Engineering Manager in the Deep Learning Inference team, leading a world‑class engineering group focused on the software that powers today’s most sophisticated AI systems – from large language models to multimodal generative AI – accelerated on NVIDIA GPUs.
What You'll Be Doing
- Lead, mentor, and scale a high‑performing engineering team focused on deep learning inference and GPU‑accelerated software.
- Drive the strategy, roadmap, and execution of NVIDIA’s inference frameworks engineering, focusing on SGLang.
- Partner with internal compiler, libraries, and research teams to deliver end‑to‑end optimized inference pipelines across NVIDIA accelerators.
- Oversee performance tuning, profiling, and optimization of large‑scale models for LLM, multimodal, and generative AI applications.
- Guide engineers in adopting best practices for CUDA, Triton, CUTLASS, and multi‑GPU communications (NIXL, NCCL, NVSHMEM).
- Represent the team in roadmap and planning discussions, ensuring alignment with NVIDIA’s broader AI and software strategies.
- Foster a culture of technical excellence, open collaboration, and continuous innovation.
What We Need To See
- MS, PhD, or equivalent experience in Computer Science, Electrical/Computer Engineering, or a related field.
- 6+ years of software development experience, including 3+ years in technical leadership or engineering management.
- Strong background in C/C++ software design and development; proficiency in Python is a plus.
- Hands‑on experience with GPU programming (CUDA, Triton, CUTLASS) and performance optimization.
- Proven record of deploying or optimizing deep learning models in production environments.
- Experience leading teams using Agile or collaborative software development practices.
Ways To Stand Out From The Crowd
- Significant open‑source contributions to deep learning or inference frameworks such as PyTorch, vLLM, SGLang, Triton, or TensorRT‑LLM.
- Deep understanding of multi‑GPU communications (NIXL, NCCL, NVSHMEM) and distributed inference architectures.
- Expertise in performance modeling, profiling, and system‑level optimization across CPU and GPU platforms.
- Proven ability to mentor engineers, guide architectural decisions, and deliver complex projects with measurable impact.
- Publications, patents, or talks on LLM serving, model optimization, or GPU performance engineering.
Competitive base salary ranges of $224,000 – $356,500 (Level 3) and $272,000 – $431,250 (Level 4). Eligible for equity and a comprehensive benefits package. Base salary is determined by location, experience, and the pay of employees in similar positions.
Location & Final Date to Receive Applications
Santa Clara, CA. Applications accepted until January 13, 2026.
EEO Statement
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. NVIDIA does not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.
Seniority Level & Employment Type
Mid‑Senior Level – Full‑time.