
Red Teaming Lead, Responsibility

Job in Mountain View, Santa Clara County, California, 94039, USA
Listing for: Google DeepMind
Full Time position
Listed on 2026-01-12
Job specializations:
  • Engineering
    AI Engineer, Systems Engineer
Salary/Wage Range or Industry Benchmark: 100,000 - 125,000 USD per year
Job Description & How to Apply Below

This role works with sensitive content or situations and may be exposed to graphic, controversial, and/or upsetting topics or content.

As Red Teaming Lead in Responsibility at Google DeepMind, you will work with a diverse team to drive and grow red teaming of Google DeepMind's most groundbreaking models. You will be responsible for our frontier risk red teaming program, which probes for and identifies emerging model risks and vulnerabilities. You will pioneer the latest red teaming methods with teams across Google DeepMind and external partners to ensure that our work is conducted in line with responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

As a Red Teaming Lead in Responsibility, you'll manage and grow our frontier risk red teaming program. You will conduct hands‑on red teaming of advanced AI models, partner with external organizations on red teaming exercises, and work closely with product and engineering teams to develop the next generation of red teaming tooling. You’ll support the team across the full range of development, from running early tests to developing higher-level frameworks and reports that identify and mitigate risks.

Key responsibilities

  • Leading and managing the end‑to‑end responsibility & safety red teaming program for Google DeepMind.
  • Designing and implementing expert red teaming of advanced AI models to identify risks, vulnerabilities, and failure modes across emerging risk areas such as CBRNe, cyber, and socioaffective behaviours.
  • Partnering with external red teamers and specialist groups to design and execute novel red teaming exercises.
  • Collaborating closely with product and engineering teams to design and develop innovative red teaming tooling and infrastructure.
  • Converting high‑level risk questions into detailed testing plans, and implementing those plans, influencing others to support as necessary.
  • Working collaboratively alongside a team of multidisciplinary specialists to deliver on priority projects and incorporate diverse considerations into projects.
  • Communicating findings and recommendations to wider stakeholders across Google Deep Mind and beyond.
  • Providing an expert perspective on AI risks, testing methodologies, and vulnerability analysis in diverse projects and contexts.

About you

To set you up for success in this role, we are looking for the following skills and experience:

  • Demonstrated experience running or managing red teaming or novel testing programs, particularly for AI systems.
  • A strong, comprehensive understanding of sociotechnical AI risks from recognized systemic risks to emergent risk areas.
  • A solid technical understanding of how modern AI models, particularly large language models, are built and operate.
  • Strong program management skills with a track record of successfully delivering complex, cross‑functional projects.
  • Demonstrated ability to work within cross‑functional teams, fostering collaboration and influencing outcomes.
  • Ability to present complex technical findings to both technical and non‑technical teams, including senior stakeholders.
  • Ability to thrive in a fast‑paced environment and to pivot to support emerging needs.
  • Demonstrated ability to identify and clearly communicate challenges and limitations in testing approaches and analyses.

In addition, the following would be an advantage:

  • Direct, hands‑on experience in safety evaluations and developing mitigations for advanced AI systems.
  • Experience with a range of experimentation and evaluation techniques, such as human study research, AI or product red‑teaming, and content rating processes.
  • Experience working with product development or…