Senior Software Development Engineer - LLM Kernel & Inference Systems

Job in Santa Clara, Santa Clara County, California, 95053, USA
Listing for: Advanced Micro Devices
Full Time position
Listed on 2026-03-01
Job specializations:
  • Software Development
    AI Engineer, Machine Learning/ML Engineer, DevOps, Software Engineer
Salary/Wage Range or Industry Benchmark: USD 150,000 - 200,000 per year
Job Description

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next‑generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover that the real differentiator is our culture.

We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond.

Together, we advance your career.

THE ROLE

As a Senior Member of Technical Staff, you will be a technical leader in Large Language Model (LLM) inference and kernel optimization for AMD GPUs. You will play a critical role in advancing high‑performance LLM serving by optimizing GPU kernels, inference runtimes, and distributed execution strategies across single‑node and multi‑node systems.

This role is deeply focused on LLM inference stacks, including vLLM, SGLang, and internal inference platforms. You will work at the intersection of model architecture, GPU kernels, compiler technology, and distributed systems, collaborating closely with internal GPU library teams and upstream open‑source communities to deliver production‑grade performance improvements.

Your work will directly impact throughput, latency, scalability, and cost efficiency for state‑of‑the‑art LLMs running on AMD GPUs.

THE PERSON

You are a senior systems engineer with deep LLM domain knowledge who enjoys working close to the metal while keeping a strong understanding of end‑to‑end inference systems. You are comfortable reasoning about attention, KV cache, batching, parallelism strategies, and how they map to GPU kernels and hardware characteristics.
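For context on the kind of reasoning this involves, here is a minimal, framework-free sketch (plain NumPy, illustrative only; the function name is hypothetical) of single-query decode attention over a growing KV cache—the memory-bound pattern that decode kernels in inference stacks optimize:

```python
import numpy as np

def decode_attention(q, k_cache, v_cache):
    """One decode step: a single query attends over all cached keys/values.

    q:        (d,)    query vector for the newest token
    k_cache:  (t, d)  keys for the t tokens processed so far
    v_cache:  (t, d)  values for those tokens
    Returns the attention output, shape (d,).
    """
    d = q.shape[-1]
    scores = k_cache @ q / np.sqrt(d)   # (t,) scaled dot products
    scores -= scores.max()              # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum()                # softmax over cached tokens
    return probs @ v_cache              # weighted sum of cached values

# Each generated token appends one row to k_cache/v_cache, so per-step
# reads grow linearly with sequence length -- one reason decode-phase
# attention is memory-bound rather than compute-bound.
rng = np.random.default_rng(0)
out = decode_attention(rng.normal(size=64),
                       rng.normal(size=(128, 64)),
                       rng.normal(size=(128, 64)))
assert out.shape == (64,)
```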

You thrive in ambiguous problem spaces, can independently define technical direction, and consistently deliver measurable performance gains. You balance strong execution with thoughtful upstream collaboration and maintain a high bar for software quality.

KEY RESPONSIBILITIES
  • Optimize LLM Inference Frameworks:
    Drive performance improvements in LLM inference frameworks such as vLLM, SGLang, and PyTorch for AMD GPUs, contributing both internally and upstream.
  • LLM‑Aware Kernel Development:
    Design and optimize GPU kernels critical to LLM inference, including attention, GEMMs, KV cache operations, MoE components, and memory‑bound kernels.
  • Distributed LLM Inference at Scale:
    Design, implement, and tune multi‑GPU and multi‑node inference strategies, including tensor/pipeline/expert parallelism (TP/PP/EP) hybrids, continuous batching, KV cache management, and disaggregated serving.
  • Model–System Co‑Design:
    Collaborate with model and framework teams to align LLM architectures with hardware‑aware optimizations, improving real‑world inference efficiency.
  • Compiler & Runtime Optimization:
    Leverage compiler technologies (LLVM, ROCm, Triton, graph compilers) to improve kernel fusion, memory access patterns, and end‑to‑end inference pipelines.
  • End‑to‑End Inference Pipeline Optimization:
    Optimize the full inference stack—from model execution graphs and runtimes to scheduling, batching, and deployment.
  • Open‑Source Leadership:
    Engage with open‑source maintainers to upstream optimizations, influence roadmap direction, and ensure long‑term sustainability of contributions.
  • Engineering Excellence:
    Apply best practices in software engineering, including performance benchmarking, testing, debugging, and maintainability at scale.
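To make the continuous-batching responsibility above concrete, here is a toy scheduler sketch (illustrative only; the class and its budget semantics are simplifications, not vLLM's or SGLang's actual API). Instead of padding to a fixed batch, it evicts finished requests between decode steps and admits waiting ones up to a per-step budget:

```python
from collections import deque

class ContinuousBatcher:
    """Toy continuous-batching scheduler (hypothetical, for illustration).

    Real servers such as vLLM schedule per decode step against a paged KV
    cache; here the "budget" is simply the maximum number of sequences
    decoding one token per step.
    """

    def __init__(self, budget):
        self.budget = budget
        self.waiting = deque()
        self.running = []  # list of (request_id, tokens_remaining)

    def submit(self, request_id, new_tokens):
        self.waiting.append((request_id, new_tokens))

    def step(self):
        # Admit waiting requests while the per-step budget allows.
        while self.waiting and len(self.running) < self.budget:
            self.running.append(self.waiting.popleft())
        # "Decode" one token for every running request.
        self.running = [(rid, n - 1) for rid, n in self.running]
        finished = [rid for rid, n in self.running if n == 0]
        # Evict finished requests immediately, freeing budget for new ones
        # -- the key difference from static batching, where the whole batch
        # waits for its slowest member.
        self.running = [(rid, n) for rid, n in self.running if n > 0]
        return finished

batcher = ContinuousBatcher(budget=2)
batcher.submit("a", 1)
batcher.submit("b", 2)
batcher.submit("c", 1)
print(batcher.step())  # "a" finishes after one step, freeing a slot for "c"
print(batcher.step())
```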
PREFERRED EXPERIENCE
  • Deep LLM Knowledge:
    Deep understanding of Large Language Model inference, including attention mechanisms, KV cache behavior, batching strategies, and latency/throughput trade‑offs.
  • LLM Inference Frameworks:
    Hands‑on experience with vLLM, SGLang, or similar inference systems (e.g., FasterTransformer), with demonstrated performance tuning.
  • GPU Kernel Development:
    Proven experience optimizing GPU kernels for deep learning workloads, particularly inference‑critical paths.
  • Distributed Inference Systems:
    Experience designing and tuning…
Position Requirements
10+ years of work experience