
AI Systems & Inference Frameworks Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Adaption Labs
Full Time position
Listed on 2026-01-17
Job specializations:
  • Software Development
    AI Engineer, Machine Learning / ML Engineer
Job Description

AI Systems & Inference Frameworks Engineer

Location Type: Hybrid
Employment Type: Full Time
Department: Platform

About us

Most AI is frozen in place: it doesn't adapt to the world. We think that's backwards. Our mandate is to build efficient intelligence that evolves in real time. Our vision is AI systems that are flexible, personalized, and accessible to everyone. We believe efficiency is what makes this possible: it's how we expand access and ensure innovation benefits the many, not the few.

We believe in talent density: bringing together the best and most driven individuals to push the boundaries of continual adaptation. We’re looking for builders and creative thinkers ready to shape the next era of intelligence.

The Role

You'll work directly with our founders to design and build the inference and optimization systems that power our core product. This role bridges research and production, combining deep exploration of inference techniques with hands-on ownership of scalable, high-performance serving infrastructure. You'll own the full lifecycle of LLM inference, from experimentation and performance analysis to deployment and iteration in production, thriving in a zero-to-one environment and helping define the technical foundations of our inference stack.

Responsibilities
  • Inference Research & Systems: design and build our LLM inference stack from zero to one, exploring and implementing advanced techniques for low‑latency, high‑throughput serving of language and multimodal models.
  • Frameworks & Optimization: develop and optimize inference using modern frameworks (e.g., vLLM, SGLang, TensorRT-LLM), experimenting with batching strategies, KV-cache management, parallelism, and GPU utilization to push performance and cost efficiency (a brief illustrative sketch follows this list).
  • Software–Hardware Co‑Design: collaborate closely with founders and model developers to analyze bottlenecks across the stack, co‑optimizing model execution, infrastructure, and deployment pipelines.
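
For a flavor of the day-to-day work described above, here is a minimal, illustrative sketch of batched offline generation with vLLM, one of the frameworks named in this posting. The model name and sampling settings are placeholder assumptions for illustration, not details of the role.

    # Illustrative sketch only: minimal batched generation with vLLM.
    # Model and sampling values are placeholders, not specifics of this posting.
    from vllm import LLM, SamplingParams

    prompts = [
        "Explain KV-cache reuse in one sentence.",
        "Why does continuous batching improve GPU utilization?",
    ]
    sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=64)

    # vLLM handles continuous batching and paged KV-cache management internally.
    llm = LLM(model="facebook/opt-125m")  # placeholder model
    outputs = llm.generate(prompts, sampling_params)

    for out in outputs:
        print(out.prompt, "->", out.outputs[0].text)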
Qualifications
  • Strong experience building and optimizing LLM inference systems in production or research environments
  • Hands‑on expertise with inference frameworks such as vLLM, SGLang, TensorRT-LLM, or similar
  • Deep performance mindset with experience in GPU‑backed systems, latency/throughput optimization, and resource efficiency
  • Solid understanding of transformer inference, serving architectures, and KV‑cache–based execution
  • Strong programming skills in Python; experience with CUDA, Triton, or C++ a plus
  • Comfort working in ambiguous, zero‑to‑one environments and driving research ideas into production systems
  • Nice to have: experience with model quantization or pruning, speculative decoding, multimodal inference, open‑source contributions, or prior work in systems or ML research labs
  • Above all, we’re looking for great teammates who make work feel lighter and aren’t afraid to go out on a limb with bold ideas. You don’t need to be perfect, but you do need to be adaptable. We encourage you to apply even if you don’t check every box.
Benefits
  • Flexible work: In‑person collaboration in the Bay Area, distributed global‑first team, and quarterly offsites.
  • Adaption Passport: Annual travel stipend to explore a country you’ve never visited. We’re building intelligence that evolves alongside you, so we encourage you to keep expanding your horizons.
  • Lunch Stipend: Weekly meal allowance for take‑out or grocery delivery.
  • Well‑Being: Comprehensive medical benefits and generous paid time off.