Senior Principal Machine Learning Engineer, vLLM Inference
Listed on 2026-01-12
Software Development
AI Engineer, Machine Learning/ML Engineer, Software Engineer
Job Summary
At Red Hat we believe the future of AI is open, and we are on a mission to bring the power of open‑source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state‑of‑the‑art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.
You would be joining the core team behind 2025's most popular open source project on GitHub.
As a Machine Learning Engineer focused on vLLM, you will be at the forefront of innovation, collaborating with our team to tackle the most pressing challenges in model performance and efficiency. Your work with machine learning and high performance computing will directly impact the development of our cutting‑edge software platform, helping to shape the future of AI deployment and utilization. If you are someone who wants to contribute to solving challenging technical problems at the forefront of deep learning in the open source way, this is the role for you.
Join us in shaping the future of AI!
What you will do
Write robust Python and C++, working on vLLM systems, high‑performance machine‑learning primitives, performance analysis and modeling, and numerical methods.
Contribute to the design, development, and testing of various inference optimization algorithms
Participate in technical design discussions and provide innovative solutions to complex problems
Act as a core contributor for the vLLM open‑source project: reviewing PRs, authoring RFCs, and mentoring external contributors
Mentor and guide other engineers on the team and foster a culture of continuous learning and innovation
What you will bring
Extensive experience writing high‑performance code for GPUs and deep knowledge of GPU hardware
Strong understanding of computer architecture, parallel processing, and distributed computing concepts
Experience with tensor math libraries such as PyTorch
Deep understanding of and experience with GPU performance optimization, such as the ability to reason about memory‑bandwidth‑bound vs. compute‑bound operations
Experience optimizing kernels for deep neural networks
Experience with NVIDIA Nsight is a plus
Solid understanding of LLM Inference Optimization fundamentals:
Continuous Batching, Paged Attention, Quantization, Speculative Decoding, Tensor Parallelism, etc.
Strong communication skills with both technical and non‑technical team members
Experience optimizing for non‑NVIDIA hardware (AMD ROCm, TPUs, etc.) is a plus
BS or MS in computer science, computer engineering, or a related field; a PhD in an ML‑related domain is considered a plus
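The bandwidth‑bound vs. compute‑bound reasoning mentioned above can be sketched with a simple roofline‑style calculation: compare an operation's arithmetic intensity (FLOPs per byte moved) against the hardware's ridge point (peak FLOP/s divided by peak memory bandwidth). The peak numbers and matrix shapes below are illustrative assumptions, not specifications of any particular GPU.

```python
# Roofline-style sketch: classify a matmul as memory-bandwidth-bound or
# compute-bound. Peak throughput/bandwidth values are assumed, illustrative
# numbers roughly in the range of a modern datacenter GPU.

PEAK_FLOPS = 312e12   # assumed peak FP16 throughput, FLOP/s
PEAK_BW = 2.0e12      # assumed peak HBM bandwidth, bytes/s
RIDGE = PEAK_FLOPS / PEAK_BW  # intensity (FLOPs/byte) where the bound flips

def gemm_intensity(m, n, k, bytes_per_elem=2):
    """Arithmetic intensity of an (m,k) @ (k,n) matmul in FLOPs per byte."""
    flops = 2 * m * n * k                              # multiply + add per MAC
    bytes_moved = (m * k + k * n + m * n) * bytes_per_elem  # read A, B; write C
    return flops / bytes_moved

def classify(intensity):
    return "compute-bound" if intensity >= RIDGE else "memory-bandwidth-bound"

# Decode-step GEMV (batch size 1): intensity ~1 FLOP/byte, far below the
# ridge, so it is limited by memory bandwidth.
decode = classify(gemm_intensity(1, 4096, 4096))
# Large prefill GEMM: intensity in the hundreds, so it is compute-bound.
prefill = classify(gemm_intensity(4096, 4096, 4096))
```

This is the intuition behind why LLM decode is typically bandwidth‑bound (motivating quantization and speculative decoding) while prefill is compute‑bound.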
#AI-HIRING
#LI-MD2
The salary range for this position is $ - $. Actual offer will be based on your qualifications.
Pay Transparency
Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote‑US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.
About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community‑powered approach to deliver high‑performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in‑office, to office‑flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure.
We’re a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready…