
Research, Post-Training San Francisco

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: Thinking Machines Lab Inc.
Apprenticeship/Internship position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Scientist, Machine Learning/ML Engineer, AI Engineer, Data Engineer
Salary/Wage Range or Industry Benchmark: $350,000 - $475,000 USD per year
Job Description & How to Apply Below

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

The role of post-training researchers sits at the core of our roadmap. This is the critical bridge between raw model intelligence and a system that is actually useful, safe, and collaborative for humans.

This role blends fundamental research and practical engineering; we do not distinguish between the two internally. You will be expected both to write high-performance code and to read technical reports. It’s an excellent fit for someone who enjoys both deep theoretical exploration and hands-on experimentation, and who wants to shape the foundations of how AI learns.

Note:

This is an "evergreen role" that we keep open on an ongoing basis to express interest in this research area. We receive many applications, and there may not always be an immediate role that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply if you get more experience, but please avoid applying more than once every 6 months.

You may also find that we post individual roles for separate, project- or team-specific needs. In those cases, you're welcome to apply to those directly in addition to an evergreen role.

What You’ll Do
  • Develop and tune the recipe: iterate on post-training recipes, consisting of a collection of datasets, training stages, and hyperparameters, and measure how recipe choices affect various metrics (a hypothetical sketch of such a recipe follows this list).
  • Iterate on evals: post-training involves a never-ending loop of defining a set of evaluations, optimizing them, and then realizing your existing evals don’t capture what matters. You’ll be responsible both for making the numbers go up and for making sure the numbers are meaningful.
  • Debug and understand: while tuning the details of a training configuration, we often observe results that don’t quite make sense. You’ll be responsible both for getting things to work and for developing a deeper understanding that we can bring to the next problem.
  • Scale and explore: post-training will involve a combination of scaling existing methodologies and developing new ones. We’ll want both to measure how performance metrics scale with dataset size and to explore completely different kinds of training datasets.
  • Publish and present research that moves the entire community forward. Share code, datasets, and insights that accelerate progress across industry and academia.
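
To give a concrete flavor of what "developing and tuning the recipe" can look like, here is a minimal, hypothetical sketch in Python: a recipe expressed as a config of datasets, stages, and hyperparameters, with each candidate recipe scored against a fixed eval suite so variants can be compared side by side. All names here (RecipeStage, run_stage, eval_suite, the dataset identifiers) are illustrative assumptions, not Thinking Machines' actual stack.

```python
# Hypothetical sketch of a post-training "recipe" and its eval loop.
# All names and functions are illustrative placeholders, not a real API.
from dataclasses import dataclass, field

@dataclass
class RecipeStage:
    name: str            # e.g. "sft", "preference_tuning"
    dataset: str         # identifier of the dataset mix used in this stage
    learning_rate: float
    epochs: int

@dataclass
class Recipe:
    stages: list[RecipeStage] = field(default_factory=list)

def run_stage(model_state: dict, stage: RecipeStage) -> dict:
    """Placeholder for a real training run; here it only records what was applied."""
    model_state = dict(model_state)
    model_state[stage.name] = {"dataset": stage.dataset, "lr": stage.learning_rate}
    return model_state

def eval_suite(model_state: dict) -> dict:
    """Placeholder evals; in practice this would run benchmarks and preference evals."""
    return {"helpfulness": 0.0, "instruction_following": 0.0, "num_stages": len(model_state)}

def run_recipe(recipe: Recipe) -> dict:
    """Run all stages in order, then score the result against the eval suite."""
    state: dict = {}
    for stage in recipe.stages:
        state = run_stage(state, stage)
    return eval_suite(state)

if __name__ == "__main__":
    baseline = Recipe(stages=[
        RecipeStage("sft", dataset="curated_dialogues_v1", learning_rate=1e-5, epochs=2),
        RecipeStage("preference_tuning", dataset="pairwise_prefs_v1", learning_rate=5e-7, epochs=1),
    ])
    print(run_recipe(baseline))  # compare this score against recipe variants
```

In practice the eval suite is the part that churns most: as the bullets above note, the metrics themselves get revised whenever they stop capturing what actually matters.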
Skills and Qualifications
  • Proficiency in Python and familiarity with at least one deep learning framework (e.g., PyTorch, TensorFlow, or JAX). Comfortable with debugging distributed training and writing code that scales.
  • Bachelor’s degree or equivalent experience in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding.
  • Clarity in communication and the ability to explain complex technical concepts in writing.

Preferred qualifications (we encourage you to apply even if you meet only some of them):

  • A strong grasp of probability, statistics, and ML fundamentals. You can look at experimental data and distinguish between real effects, noise, and bugs.
  • Prior experience with RLHF, RLAIF, preference modeling, or reward learning for large models.
  • Experience managing or analyzing human data collection campaigns or large-scale annotation workflows.
  • Research or engineering contributions in alignment, data-centric AI, or human-AI collaboration.
  • PhD in Computer Science, Machine Learning, Physics, Mathematics, or a related discipline with strong theoretical and empirical grounding; or, equivalent industry research experience.
Logistics
  • Location: This role is based in San Francisco, California.
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
