
Member of Technical Staff, Training and Inference

Job in Palo Alto, Santa Clara County, California, 94306, USA
Listing for: Boson AI
Full Time, Apprenticeship/Internship position
Listed on 2026-01-12
Job specializations:
  • Software Development
AI Engineer, Software Engineer, Machine Learning/ML Engineer
Salary/Wage Range or Industry Benchmark: 150,000 USD per year
Job Description

Boson AI is an early-stage startup building large audio models for everyone to enjoy and use. Our founders (Alex Smola, Mu Li) and a team of Deep Learning, Optimization, NLP, and Statistics scientists and engineers are working on high-quality generative AI models for language and beyond.

We are seeking research scientists and engineers to join our team full-time in our Santa Clara office. In this role, you will implement and improve distributed optimization algorithms, performance-tune architectures, improve inference, and help us make deep networks run efficiently on our cluster. The ideal candidate will have a strong background in CUDA, Triton, PyTorch, distributed optimization, and deep learning architectures.

We encourage you to apply even if you do not believe you meet every single qualification. As long as you are motivated to learn and eager to join in developing foundation models, we'd love to chat.

Responsibilities
  • Optimize model architectures and loss objectives to handle combinations of images, video, text, speech, and audio data.
  • Implement and optimize kernels for efficient training on Hopper and Blackwell GPUs (see the sketch after this list).
  • Drive performance optimization (floating-point formats, sparsity, systems-level optimization).
  • Work on distributed optimization and training.
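
For a purely illustrative sense of the kernel work above, here is a minimal Triton sketch of an element-wise addition kernel launched from PyTorch. The kernel, block size, and launcher are hypothetical examples added for context, not Boson AI code.

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        # Each program instance handles one contiguous block of elements.
        pid = tl.program_id(axis=0)
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements  # guard the tail block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n_elements = out.numel()
        # Launch one program per BLOCK_SIZE-sized chunk of the flattened tensors.
        grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
        add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
        return out

Production kernels for this kind of role would go well beyond this (fusion, mixed floating-point formats, Hopper/Blackwell-specific features), but the launch and masking pattern is the same.
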
You may be a good fit if you have
  • Experience writing clean and efficient code.
  • A Master's or Doctoral degree in computer science or an equivalent field.
  • Proficiency in at least one deep learning framework, such as PyTorch or JAX.
  • Participation in at least one research project related to distributed training or inference.
Strong candidates may also have
  • Experience implementing their own kernels in CUDA or another compiler/toolkit (Triton, ThunderKittens, PTX, etc.).
  • Experience in distributed optimization (e.g. using DeepSpeed or FSDP), ideally designing performance optimizations (see the sketch after this list).
  • Experience in computer networking (e.g. InfiniBand, using SHARP).
  • Experience handling billion-scale data.
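
As a rough illustration of the distributed-training tooling mentioned above, the sketch below wraps a toy model in PyTorch's FullyShardedDataParallel (FSDP). The model, hyperparameters, and launch setup (one process per GPU via torchrun, NCCL backend) are assumptions for the example only, not a description of Boson AI's stack.

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    def main():
        # Expects to be launched with: torchrun --nproc_per_node=<num_gpus> this_file.py
        dist.init_process_group(backend="nccl")
        local_rank = dist.get_rank() % torch.cuda.device_count()
        torch.cuda.set_device(local_rank)

        # Placeholder model; a real workload would shard a large transformer.
        model = torch.nn.Sequential(
            torch.nn.Linear(1024, 4096),
            torch.nn.GELU(),
            torch.nn.Linear(4096, 1024),
        )

        # FSDP shards parameters, gradients, and optimizer state across ranks.
        model = FSDP(model, device_id=local_rank)
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        # One dummy training step.
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        loss.backward()
        optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
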

$150,000 - $600,000 a year

Total compensation includes base pay, equity, and benefits.
