
Privacy Research Engineer, Safeguards

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: The Rundown AI, Inc.
Full Time position
Listed on 2026-01-17
Job specializations:
  • IT/Tech
    Data Scientist, AI Engineer
Salary/Wage Range or Industry Benchmark: 200,000 - 250,000 USD per year
Job Description & How to Apply Below

About the Role

We are looking for researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for models to interact with private user data. In this role, you'll design and implement privacy-preserving techniques, audit our current approaches, and set the direction for how Anthropic handles privacy more broadly.

Responsibilities:
  • Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process
  • Develop privacy-first training algorithms and techniques
  • Develop evaluation and auditing techniques to measure the privacy of training algorithms
  • Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy
  • Advocate on behalf of our users to ensure responsible handling of all data
You may be a good fit if you have:
  • Experience working on privacy-preserving machine learning
  • A track record of shipping products and features inside a fast-moving environment
  • Strong coding skills in Python and familiarity with ML frameworks like PyTorch or JAX
  • Deep familiarity with large language models, how they work, and how they are trained
  • Experience working with privacy-preserving techniques (e.g., differential privacy, and how it differs from k-anonymity, l-diversity, and t-closeness; see the sketch after this list)
  • Experience supporting fast-paced startup engineering teams
  • Demonstrated success in bringing clarity and ownership to ambiguous technical problems
  • Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics
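For readers less familiar with the techniques named above, the sketch below (illustrative only, not part of the original posting) shows the Laplace mechanism, the simplest building block of differential privacy: noise calibrated to a query's sensitivity is added to the query's answer, so the privacy guarantee attaches to the release mechanism rather than to the released dataset, which is the key contrast with k-anonymity-style approaches. The records, query, and epsilon value are hypothetical.

  import numpy as np

  rng = np.random.default_rng(0)

  def dp_count(records, predicate, epsilon):
      # A counting query has L1 sensitivity 1 (adding or removing one record
      # changes the count by at most 1), so Laplace noise with scale 1/epsilon
      # gives epsilon-differential privacy for this single release.
      true_count = sum(predicate(r) for r in records)
      return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

  # Hypothetical records: user ages in some dataset
  ages = [23, 35, 41, 29, 52, 37, 44, 31]
  print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))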
Strong candidates may also:
  • Have published papers on the topic of privacy-preserving ML at top academic venues
  • Have prior experience training large language models (e.g., collecting training datasets, pre-training models, post-training models via fine-tuning and RL, running evaluations on trained models)
  • Have prior experience developing tooling to support privacy-preserving ML (e.g., differential privacy in TF-Privacy or Opacus); see the sketch after this list
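The tooling bullet above mentions Opacus; the following is a minimal sketch of what differentially private training (DP-SGD) with Opacus looks like in PyTorch, included for context only. The toy model, data, and hyperparameters (noise_multiplier, max_grad_norm, delta) are placeholder assumptions, not values from the posting.

  import torch
  from torch import nn, optim
  from torch.utils.data import DataLoader, TensorDataset
  from opacus import PrivacyEngine

  # Toy data and model; placeholders for illustration only
  X = torch.randn(256, 10)
  y = torch.randint(0, 2, (256,))
  loader = DataLoader(TensorDataset(X, y), batch_size=32)

  model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
  optimizer = optim.SGD(model.parameters(), lr=0.05)
  criterion = nn.CrossEntropyLoss()

  # Wrap model, optimizer, and loader so gradients are clipped per sample
  # and Gaussian noise is added before each optimizer step.
  privacy_engine = PrivacyEngine()
  model, optimizer, loader = privacy_engine.make_private(
      module=model,
      optimizer=optimizer,
      data_loader=loader,
      noise_multiplier=1.1,  # assumed noise scale, not from the posting
      max_grad_norm=1.0,     # assumed per-sample clipping bound
  )

  for epoch in range(3):
      for xb, yb in loader:
          optimizer.zero_grad()
          loss = criterion(model(xb), yb)
          loss.backward()
          optimizer.step()

  # The privacy accountant reports the epsilon spent for a chosen delta.
  print(privacy_engine.get_epsilon(delta=1e-5))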