Member of Technical Staff, Model Evaluation

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Inception
Full Time position
Listed on 2026-03-14
Job specializations:
  • Software Development
    Machine Learning / ML Engineer, AI Engineer, Data Scientist
Salary/Wage Range or Industry Benchmark: USD 60,000 - 80,000 per year
Job Description

The Role

We seek experienced engineers and scientists to develop the evaluation metrics and systems that drive frontier LLM performance. You'll design the frameworks that tell us whether our models are improving and ensure they perform reliably at scale in production.

Key Responsibilities
  • Design, develop, and maintain robust evaluation frameworks and benchmarks for measuring LLM performance across diverse tasks and domains.
  • Define and implement quantitative metrics that capture model quality, safety, and reliability, and that support regression detection (a minimal sketch follows this list).
  • Build scalable, automated evaluation pipelines that integrate into model training and deployment workflows.
  • Conduct rigorous statistical analysis of model outputs to identify failure modes, biases, and performance gaps.
  • Partner with product and customer-facing teams to translate real-world use cases into meaningful evaluation criteria.
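For illustration, a minimal sketch of the kind of metric harness these responsibilities describe: exact-match scoring over a fixed eval set, with a bootstrap confidence interval so each run reports uncertainty alongside the headline number. The generate callable is a placeholder for a model client, not a real API.

    import random
    from typing import Callable, List, Tuple

    def exact_match(prediction: str, reference: str) -> float:
        # Normalize whitespace and case before comparing.
        return float(prediction.strip().lower() == reference.strip().lower())

    def evaluate(generate: Callable[[str], str],
                 dataset: List[Tuple[str, str]],
                 n_boot: int = 1000,
                 seed: int = 0) -> Tuple[float, Tuple[float, float]]:
        # Score every (prompt, reference) pair in the eval set.
        scores = [exact_match(generate(prompt), ref) for prompt, ref in dataset]
        mean = sum(scores) / len(scores)
        # Nonparametric bootstrap over examples for a 95% interval.
        rng = random.Random(seed)
        boots = sorted(
            sum(rng.choices(scores, k=len(scores))) / len(scores)
            for _ in range(n_boot)
        )
        return mean, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])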
Qualifications
  • BS/MS/PhD in Computer Science, Machine Learning, Statistics, or a related field (or equivalent experience).
  • At least 2 years of experience in ML evaluation, applied ML research, or a related engineering role.
  • Strong understanding of LLM fundamentals (autoregressive generation, instruction tuning, RLHF, in-context learning, decoding strategies).
  • Proficiency in Python and ML frameworks such as PyTorch.
  • Experience designing and implementing evaluation metrics and benchmarks for generative models.
  • Solid foundation in statistics, experimental design, and hypothesis testing (illustrated in the sketch after this list).
  • Experience with version control (Git) and containerization (Docker).
  • Excellent communication skills with the ability to distill complex evaluation results into actionable insights.
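As a concrete instance of the statistics and hypothesis-testing bullet, a sketch of a paired permutation test: given per-example scores for two models on the same eval set, it estimates how often the observed mean difference would arise by chance. The inputs are hypothetical.

    import random
    from typing import List

    def paired_permutation_test(scores_a: List[float],
                                scores_b: List[float],
                                n_perm: int = 10_000,
                                seed: int = 0) -> float:
        assert len(scores_a) == len(scores_b)
        # Per-example score differences between the two models.
        diffs = [a - b for a, b in zip(scores_a, scores_b)]
        observed = abs(sum(diffs)) / len(diffs)
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_perm):
            # Under the null hypothesis the models are interchangeable,
            # so randomly flip the sign of each per-example difference.
            flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
            if abs(flipped) / len(diffs) >= observed:
                hits += 1
        return hits / n_perm  # two-sided p-value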
Preferred Skills
  • Experience with human-in-the-loop evaluation systems (Likert-scale annotation, pairwise preference ranking, red-teaming); see the sketch after this list.
  • Familiarity with LLM safety and alignment evaluation (toxicity, hallucination detection, factual grounding).
  • Knowledge of existing benchmark suites (MMLU, HumanEval, HELM, BIG-Bench) and their limitations.
  • Experience building evaluation infrastructure at scale using cloud platforms (AWS, GCP, Azure).
  • Familiarity with MLOps practices and CI/CD pipelines for model validation.
  • Experience with data engineering, large-scale data labeling, or synthetic data generation for evaluation purposes.
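To illustrate the pairwise-preference item above, a sketch that aggregates (winner, loser) judgments into per-model strengths using the standard Bradley-Terry minorization-maximization update. The judgment format is an assumption about how annotations would arrive; a plain win rate is a fine first cut, but Bradley-Terry stays meaningful when model pairs are compared unevenly often.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    def bradley_terry(judgments: List[Tuple[str, str]],
                      n_iter: int = 200) -> Dict[str, float]:
        # judgments: (winner, loser) model-name pairs from annotators.
        assert judgments, "need at least one judgment"
        models = {m for pair in judgments for m in pair}
        wins = defaultdict(float)
        games = defaultdict(float)  # comparisons per unordered pair
        for winner, loser in judgments:
            wins[winner] += 1.0
            games[frozenset((winner, loser))] += 1.0
        strength = {m: 1.0 for m in models}
        for _ in range(n_iter):
            # MM update: p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)
            new = {}
            for m in models:
                denom = sum(
                    games[frozenset((m, o))] / (strength[m] + strength[o])
                    for o in models if o != m and games[frozenset((m, o))] > 0
                )
                new[m] = wins[m] / denom if denom else strength[m]
            norm = sum(new.values()) / len(new)  # fix the arbitrary scale
            # Small floor avoids division by zero for winless models.
            strength = {m: max(v / norm, 1e-9) for m, v in new.items()}
        return strength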