Machine Learning Scientist - Open Source Lead
Listed on 2026-01-23
About LMArena
LMArena is the open platform for evaluating how AI models perform in the real world. Created by researchers from UC Berkeley's Sky Lab, our mission is to measure and advance the frontier of AI for real-world use. Millions of people use LMArena each month to explore how frontier systems perform, and we use our community's feedback to build transparent, rigorous, and human-centered model evaluations.
Leading enterprises and AI labs rely on our evaluations to understand real-world reliability, alignment, and impact. Our leaderboards are the gold standard for AI performance, trusted by leaders across the AI community and shaping the global conversation on model reliability and progress. We're a team of researchers, engineers, academics, and builders from places like UC Berkeley, Google, Stanford, DeepMind, and Discord.
We seek truth, move fast, and value craftsmanship, curiosity, and impact over hierarchy. We’re building a company where thoughtful, curious people from all backgrounds can do their best work. Everyone on our team is a deep expert in their field—our office radiates excellence, energy, and focus.
The Role
LMArena is looking for a Machine Learning Scientist to lead our open-source research, including open dataset and code releases, advancing how the world evaluates and understands AI models in the open. You'll design, run, and share new methods and experiments that reveal what makes models useful, trustworthy, and capable, grounded in human preference signals and released openly for the full ecosystem and research community to build upon.
In this role, you'll be responsible for taking our commitment to openness from principle to practice: curating high-impact datasets, developing new methodology and reproducible benchmarks, and releasing code that enables the research ecosystem to push AI evaluations forward. Your work will shape the public leaderboard, power community tools, and strengthen transparency in AI evaluation worldwide. The role is deeply interdisciplinary, working with engineers, product teams, marketing, and the broader research community to advance how we compare models, analyze preference data, and understand factors like style, reasoning, and robustness.
You'll also work closely with GTM teams as the spokesperson for our open research efforts: strengthening research partnerships, expanding research community participation, and championing programs that grow and support our research network.
Responsibilities
- Design and conduct experiments to evaluate AI model behavior across reasoning, style, robustness, and user preference dimensions
- Develop new metrics, methodologies, and evaluation protocols that go beyond traditional benchmarks
- Analyze large-scale human voting and interaction data to uncover insights into model performance and user preferences (see the illustrative sketch after this list)
- Communicate results to the broader research community through academic papers, educational content, and conference talks
- Collaborate with engineers to implement and scale research findings into production systems
- Prototype and test research ideas rapidly, balancing rigor with iteration speed
- Partner with model providers to shape evaluation questions and support responsible model testing
- Contribute to the scientific integrity and transparency of the LMArena leaderboard and tools
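
To make the preference-data work concrete, here is a minimal, illustrative sketch of the kind of model commonly used to turn arena-style pairwise votes into a ranking: a Bradley-Terry fit, where P(i beats j) = sigmoid(theta_i - theta_j). This is a hypothetical example under assumed data, not LMArena's actual pipeline; all model names and votes below are made up.

# Illustrative only: fit Bradley-Terry log-strengths to pairwise human votes
# by gradient ascent on the log-likelihood P(i beats j) = sigmoid(theta_i - theta_j).
import numpy as np

# Hypothetical (winner, loser) votes from side-by-side model comparisons.
votes = [
    ("model_a", "model_b"),
    ("model_a", "model_c"),
    ("model_b", "model_c"),
    ("model_a", "model_b"),
    ("model_c", "model_b"),
    ("model_b", "model_a"),
]

models = sorted({m for pair in votes for m in pair})
idx = {m: i for i, m in enumerate(models)}

theta = np.zeros(len(models))  # one log-strength per model
lr = 0.1
for _ in range(2000):
    grad = np.zeros_like(theta)
    for winner, loser in votes:
        w, l = idx[winner], idx[loser]
        p_win = 1.0 / (1.0 + np.exp(theta[l] - theta[w]))  # sigmoid(theta_w - theta_l)
        grad[w] += 1.0 - p_win  # winner's strength rises by the "surprise" of the win
        grad[l] -= 1.0 - p_win  # loser's strength falls by the same amount
    theta += lr * grad
theta -= theta.mean()  # strengths are identified only up to an additive constant

# Print models ranked by fitted strength.
for m in sorted(models, key=lambda m: -theta[idx[m]]):
    print(f"{m}: {theta[idx[m]]:+.3f}")

Arena-scale analyses layer on tie handling, confidence intervals, and controls for confounders such as response style and length; the sketch above shows only the core rating step.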
Qualifications
- PhD or equivalent research experience in Machine Learning, Natural Language Processing, Statistics, or a related field
- Willingness to use personal and professional platforms to amplify open research initiatives and invite collaboration
- Strong understanding of LLMs and modern deep learning architectures (e.g., Transformers, diffusion models, reinforcement learning with human feedback); proficiency in Python and ML research libraries such as PyTorch, JAX, or TensorFlow
- Demonstrated ability to design and analyze experiments with statistical rigor
- Experience publishing research or working on open‑source projects in ML, NLP, or AI evaluation
- Comfortable working with real‑world usage data and designing metrics beyond standard…