Senior Machine Learning Research Engineer
Listed on 2026-02-28
Engineering
Robotics, AI Engineer
Zendar is looking for a Senior Machine Learning Research Engineer (Multi-Sensor Fusion Perception & Foundation Models) to join our Berkeley office. Zendar develops one of the best 360-degree radar-based perception systems for the automotive industry. We’re now expanding our capabilities to deliver full-scene perception outputs using early fusion of camera and radar, and scaling these technologies across both the automotive and robotics industries. We are not bogged down by legacy systems, and by joining us you’ll have the opportunity to define and own a next-generation perception stack that enables reliable autonomy at scale.
Zendar is building perception for physical AI—giving engineers a strong foundation for creating world-class robotics applications.
Zendar pioneered RF perception that delivers a vision-like, semantically segmented understanding of the environment—running on embedded automotive systems using only radar data. This RF perception forms the backbone of Zendar’s next-generation foundation models, which are built around early fusion of RF and vision data.
This architecture inverts the traditional perception stack. Instead of treating RF signals as secondary, Zendar’s models combine vision’s high angular resolution with RF’s strong temporal and spatial understanding at the earliest stages of perception. The result is a system that sees farther, remains robust to occlusion and adverse weather, and operates far more efficiently than vision-only or lidar-based approaches.
See a demo of Zendar’s foundational RF perception
At Zendar, you’ll work at the cutting edge of autonomous mobility and robotics—advancing foundation models that will power the next generation of physical AI systems.
You’ll work with large-scale, real-world, multi-modal datasets composed of synchronized and calibrated radar, camera, and lidar data collected across multiple continents.
Our team brings together deep expertise across hardware, signal processing, machine learning, and software engineering, with decades of experience in sensing and perception.
We are a global team with offices in Berkeley, Lindau (Germany), and Paris (France). Zendar is well-funded by leading Tier-1 venture capital firms and has established strong industry partnerships.
Although AI is central to what we build, our hiring process is intentionally human: every résumé is reviewed by a real person.
Your Role:
Zendar’s Semantic Spectrum perception technology extracts a rich scene understanding from radar sensing. Our next goal is to build a foundation-model-driven perception stack that fuses streaming camera and radar to produce full perception outputs that are robust enough for real-world autonomy: occupancy/free-space (e.g., occupancy grid), object detection and tracking, lane line and road structure estimation, and the interfaces required to make these outputs actionable for downstream systems.
We are seeking an experienced Senior ML Engineer to design, implement, and drive the architecture of these models end-to-end, including training from scratch on large-scale datasets (not just fine-tuning), defining evaluation and long-tail validation, and partnering with platform and product teams to ensure successful deployment in real-time systems. This is an ideal position for an engineer who enjoys owning hard technical problems, making rigorous tradeoffs, and building systems that work reliably in the messy long tail of the real world.
In this role you will collaborate closely with the platform, embedded, and robotics teams. You will work with our real-world dataset of tens of thousands of kilometers of driving collected across multiple continents and geographies, and you will have opportunities to validate results on real vehicles.
Own architecture and technical strategy for multi-sensor perception models, including explicit tradeoffs (why approach A vs. B), risks, validation plans, and timelines.
Build foundation-scale / transformer-based perception models trained from scratch on large-scale multi-modal driving datasets (not limited to fine-tuning).
Develop fusion architectures for streaming multi-sensor inputs…