Machine Learning: Multimodal Foundation Models
Listed on 2026-03-02
Software Development / Machine Learning
ML Engineer, AI Engineer, Software Engineer
Location: San Francisco
Employment Type: Full time
Department: Engineering Software
Compensation: Base $200K – $350K
Actual compensation will depend on skills, experience, and qualifications.
Base salary is one part of the total compensation package. The role is also eligible for equity through the company’s discretionary equity program, along with a comprehensive benefits package that includes medical, dental, and vision coverage, and access to a 401(k) plan.
The Bot Company
We're building a helpful robot for every home.
We're a small team of engineers, designers, and operators based in San Francisco. Our team comes from Tesla, Cruise, OpenAI, Google, Pixar, and many other great companies. In the past we've shipped to hundreds of millions of users and know what it takes to build amazing products and experiences.
Our team is deliberately lean to promote rapid decision making and do away with bureaucracy and hierarchy. Everyone is an IC and is empowered with massive scope, radical ownership, and direct responsibility. We work across the stack with a culture built for rapid iteration and fast execution.
What we look for in all candidates
All roles at The Bot Company demand extreme sharpness and the ability to move fast in high-intensity environments. Throughout the process, we expect candidates to demonstrate:
- Exceptional mental acuity: you think quickly, learn instantly, and reason across unfamiliar domains.
- Engineering curiosity: you naturally dig into how systems work, even outside your specialty.
- High performance mindset: you move fast, handle ambiguity, and excel when the environment is demanding.
Multimodal Foundation Models
We are building unified foundation models that natively reason across text, image, video, and kinematics to drive intelligent robotic policies.
You will work on large multimodal networks and own the entire stack, from data through training to deployment.
What You'll Do
- Build Native Multimodal Policies: Develop architectures where vision, language, and other modalities share a unified representation.
- Improve Cross-Modal Reasoning: Research and implement methods to ensure the model doesn't just associate modalities but actually reasons through them (e.g., grounding visual physics in kinematic constraints).
- Own the Training Loop End-to-End: Design, run, debug, and iterate on large-scale training experiments, diagnosing failure modes, improving data mixtures, and tightening evaluation to drive measurable gains.
- Ship and Iterate on Real Systems: Integrate models into real robotic stacks, build on robot code to deploy your models, and optimize performance for edge inference.
What You'll Bring
- Very strong coding skills in Python, C++, or Rust.
- Production MLLM Experience: Track record of training and deploying large-scale multimodal models.
- Pretraining & RL Mastery: Deep intuition for LLM-style pretraining, post-training, and reinforcement learning at scale.
- Infrastructure Fluency: Comfortable managing and optimizing large-scale experiments on massive GPU clusters.
You’ll work with a small, elite team on challenges that require speed, intelligence, and deep engineering instinct. If you enjoy understanding systems at all levels, moving fast, and thinking even faster, you’ll thrive here.