Member of Technical Staff - ML Research Engineer, Optimization
Listed on 2026-01-18
Software Development / AI Engineer
About Liquid AI
Spun out of MIT CSAIL, we build AI systems that run where others stall: on CPUs, with low latency, minimal memory, and maximum reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
Our Liquid Foundation Models are designed for efficient edge deployment. Building them requires high-performance GPU infrastructure for training and post-training workflows. We need someone who can write custom CUDA kernels, profile at the hardware level, and turn research techniques into real-world speedups. Our team is small, fast-moving, and high-ownership. We're looking for someone who finds joy in memory hierarchies, tensor cores, and profiler output.
While San Francisco and Boston are preferred, we are open to other locations.
What We're Looking For
We need someone who:
Lives in the profiler: Our team gets excited about finding the 10% improvement hiding in a memory access pattern (see the sketch after this list). We use Nsight, CUDA profilers, and low-level debugging tools daily.
Bridges theory and practice: We draw on research papers and implement their techniques as production-grade kernels that deliver the promised speedups.
Executes independently: We have strong researchers. We need someone our team can point at optimization problems and trust to deliver.
Cares about details: Memory hierarchy, compute vs. memory-bound workloads, tensor core utilization. These are the levers we pull every day.
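To give candidates a concrete, hypothetical flavor of what an improvement hiding in a memory access pattern looks like, here is a minimal illustrative sketch (not Liquid AI code): two CUDA kernels doing identical arithmetic over a row-major matrix, differing only in warp-level access pattern, the kind of contrast a profiler such as Nsight Compute surfaces as a gap in DRAM throughput.

    // Uncoalesced: one thread per row. At each inner step, adjacent
    // threads in a warp touch addresses `cols` floats apart, so each
    // warp-wide load spans many cache lines.
    __global__ void scale_strided(const float* x, float* y,
                                  int rows, int cols) {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < rows)
            for (int c = 0; c < cols; ++c)
                y[row * cols + c] = 2.0f * x[row * cols + c];
    }

    // Coalesced: one thread per column. At each inner step, adjacent
    // threads touch adjacent addresses, so a warp's 32 loads collapse
    // into a few memory transactions. Same arithmetic, far higher
    // effective DRAM throughput.
    __global__ void scale_coalesced(const float* x, float* y,
                                    int rows, int cols) {
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (col < cols)
            for (int r = 0; r < rows; ++r)
                y[r * cols + col] = 2.0f * x[r * cols + col];
    }

On most GPUs the coalesced version typically runs several times faster on large matrices despite doing exactly the same work.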
What You'll Do
Write high-performance GPU kernels for our novel model architectures
Profile and optimize training pipelines to eliminate bottlenecks
Support post-training workflows with fast GPU inference for RL and data generation
Implement research techniques as production-ready kernel optimizations
Integrate optimized kernels into PyTorch/Triton pipelines
Must-have:
Hands-on experience writing custom GPU kernels (CUDA, not just cuDNN)
Deep understanding of memory hierarchy and compute/memory-bound optimization
Experience with low-level profiling tools (Nsight, CUDA profilers)
Strong C/C++ skills
Nice-to-have:
CUTLASS experience
Fine-grained optimizations targeting tensor cores
PyTorch/Triton integration experience
What Success Looks Like
Training throughput or inference latency has improved by a measurable margin on at least one critical pipeline
At least one research technique has shipped as a production kernel that delivered on its theoretical promise
Post-training workflows (RL, data generation) are no longer bottlenecked on GPU inference
Unique challenges: Our efficiency-focused models have architectural innovations that generic frameworks can't optimize. This is not just scale-up work.
What We Offer
Compensation: Competitive base salary with equity in a unicorn-stage company
Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
Financial: 401(k) matching up to 4% of base pay
Time Off: Unlimited PTO plus company-wide Refill Days throughout the year