Machine Learning Engineer, Robotics/Embodied AI
Engineering | Robotics, AI Engineer | San Francisco, United States | Posted on 01/14/2026
We're hiring a Machine Learning Engineer to lead our progression from teleoperation to autonomy. You'll use cutting-edge Vision-Language-Action (VLA) models and imitation learning to make our teleoperated robots increasingly autonomous over time. This role spans the full ML lifecycle: data capture and organization, model training and evaluation, deployment to edge devices, and continuous improvement based on real-world performance. You'll work at the intersection of robotics, computer vision, and foundation models—turning thousands of hours of human demonstrations into autonomous capabilities.
What we do
Avatar Robotics is building flexible robot fleets to revolutionize industrial work across the country. We're on a mission to make every tedious and dangerous warehouse/factory job virtual, safe, and semi-autonomous.
With proven AI approaches and long-distance teleoperation, you'll join a team that deploys a physical work solution that's scalable now, not later. We envision a world where millions of machines will make our goods and consumables more affordable and accessible than ever, while critical workers operate these robot fleets from the comfort of their homes.
At Avatar Robotics, you'll create the workforce of the future—in one of the largest markets ($1T+ manual labor market in the US alone).
We're a small but powerful team in the early innings of deploying thousands of units into facilities worldwide.
What you'll do
- Design and implement data collection pipelines for synchronized sensor streams and task annotations from teleoperated robots.
- Build data processing systems for cleaning, labeling, and organizing multi-modal data (RGB-D, LiDAR, proprioceptive feedback).
- Develop and train Vision-Language-Action models, imitation learning policies, and behavior cloning systems (a minimal training sketch follows this list).
- Deploy trained models to edge compute devices with real-time inference constraints.
- Collaborate with Robotics and Teleop teams to define autonomous capabilities and integrate models into the control stack.
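For a concrete flavor of the imitation-learning work described above, here is a minimal behavior-cloning sketch in PyTorch. Everything in it is a hypothetical illustration rather than Avatar Robotics code: the observation and action dimensions, the tiny MLP policy, and the synthetic stand-in for logged teleoperation demonstrations are all assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical shapes: a 512-d visual embedding plus 14-d proprioception,
# mapped to a 7-DoF action. A real pipeline would feed raw RGB-D frames
# through a vision backbone instead of using precomputed embeddings.
OBS_DIM, ACT_DIM = 512 + 14, 7

class BCPolicy(nn.Module):
    """A deliberately tiny MLP policy for behavior cloning."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs):
        return self.net(obs)

# Synthetic stand-in for logged (observation, operator_action) demonstration pairs.
demos = TensorDataset(torch.randn(1024, OBS_DIM), torch.randn(1024, ACT_DIM))
loader = DataLoader(demos, batch_size=64, shuffle=True)

policy = BCPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(10):
    for obs, action in loader:
        # Behavior cloning: regress the operator's action from the observation.
        loss = nn.functional.mse_loss(policy(obs), action)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The same supervised structure scales up to VLA-style models; the policy class and loss get swapped for a larger architecture and, typically, an action-tokenization or diffusion head.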
Requirements
- 3+ years of hands-on ML experience, preferably in robotics, computer vision, or embodied AI.
- Strong foundation in deep learning frameworks (PyTorch, JAX, TensorFlow) and training large models.
- Experience with imitation learning, behavior cloning, or reinforcement learning in physical systems.
- Proficiency with computer vision techniques (object detection, segmentation, point cloud processing).
- Understanding of robotics fundamentals (kinematics, control theory, sensor fusion).
- Strong Python skills and experience with ML libraries.
- Bonus: Experience with Vision-Language-Action models, foundation models, or deploying models to edge devices (a deployment sketch follows below).
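Since the role also covers deployment under real-time constraints, here is a rough sketch of one common path: tracing a trained PyTorch policy to TorchScript so it can run without a full Python runtime on an edge device, plus a crude latency check. The 10 ms budget, the stand-in model, and the shapes are assumptions for illustration, not numbers from this posting.

```python
import time
import torch
import torch.nn as nn

# Same hypothetical observation/action shapes as the sketch above.
OBS_DIM, ACT_DIM = 512 + 14, 7

# Stand-in for a trained policy checkpoint.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
).eval()
example_obs = torch.randn(1, OBS_DIM)

# Trace to TorchScript so the policy can be loaded from C++ or a
# stripped-down runtime on the robot's edge computer.
scripted = torch.jit.trace(policy, example_obs)
scripted.save("policy_edge.pt")

# Crude latency check against a notional 10 ms control-loop budget.
with torch.inference_mode():
    scripted(example_obs)  # warm-up
    start = time.perf_counter()
    for _ in range(100):
        scripted(example_obs)
    per_call_ms = (time.perf_counter() - start) / 100 * 1e3

print(f"mean inference latency: {per_call_ms:.2f} ms (budget: 10 ms)")
```

In practice, edge targets often call for an additional export step (for example ONNX or TensorRT) and quantization; the measurement loop above is the part that stays the same, since the real-time budget is what the deployment ultimately has to satisfy.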
What to expect
- Our team develops on physical robots in person; expect hands-on testing and debugging.
- Must be located in, or willing to relocate to, San Francisco, CA.
- Opportunity to shape the autonomy roadmap for a rapidly scaling robot fleet.
Come join us in building a new global economy, where anyone can do manual work remotely, at robot scale. Together, we can make goods and resources more affordable and available than they've ever been, for everyone on Earth.