
Researcher, Pretraining Safety

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: OpenAI
Full Time position
Listed on 2026-01-12
Job specializations:
  • Software Development
    Data Scientist, AI Engineer
Salary/Wage Range or Industry Benchmark: 125,000 – 150,000 USD yearly
Job Description & How to Apply Below



About The Team

The Safety Systems team is responsible for the safety work that ensures our best models can be safely deployed to the real world to benefit society. The team is at the forefront of OpenAI’s mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.

The Pretraining Safety Team’s Goal

Build safer, more capable base models and enable earlier, more reliable safety evaluation during training. We aim to:

  • Develop upstream safety evaluations that monitor how and when unsafe behaviors and goals emerge.
  • Create safer priors through targeted pretraining and mid‑training interventions that make downstream alignment more effective and efficient.
  • Design safe‑by‑design architectures that allow for greater controllability of model capabilities.

In addition, we will conduct foundational research to understand how behaviors emerge, generalize, and can be reliably measured throughout training.

About The Role

The Pretraining Safety team is pioneering how safety is built into models before they reach post‑training and deployment. In this role, you will work throughout the full stack of model development with a focus on pre‑training:

  • Identify safety‑relevant behaviors as they first emerge in base models.
  • Evaluate and reduce risk without waiting for full‑scale training runs.
  • Design architectures and training setups that make safer behavior the default.
  • Strengthen models by incorporating richer, early safety signals.

We collaborate across OpenAI’s safety ecosystem—from Safety Systems to Training—to ensure that safety foundations are robust, scalable, and grounded in real‑world risks.

In This Role, You Will
  • Develop new techniques to predict, measure, and evaluate unsafe behavior in early‑stage models.
  • Design data curation strategies that improve pretraining priors and reduce downstream risk.
  • Explore safe‑by‑design architectures and training configurations that improve controllability.
  • Introduce novel safety‑oriented loss functions, metrics, and evaluations into the pretraining stack.
  • Work closely with cross‑functional safety teams to unify pre‑ and post‑training risk reduction.
You Might Thrive In This Role If You
  • Have experience developing or scaling pretraining architectures (LLMs, diffusion models, multimodal models, etc.).
  • Are comfortable working with training infrastructure, data pipelines, and evaluation frameworks (e.g., Python, PyTorch/JAX, Apache Beam).
  • Enjoy hands‑on research—designing, implementing, and iterating on experiments.
  • Enjoy collaborating with diverse technical and cross‑functional partners (e.g., policy, legal, training).
  • Are data‑driven with strong statistical reasoning and rigor in experimental design.
  • Value building clean, scalable research workflows and streamlining processes for yourself and others.
About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we…
