
Principal Security Engineer, Trust & Risk

Job in City of Westminster, Central London, Greater London, England, UK
Listing for: AI Security Institute
Full Time position
Listed on 2026-01-11
Job specializations:
  • IT/Tech
    Cybersecurity, AI Engineer
Position: Staff/Principal Security Engineer, Trust & Risk
Location: City of Westminster

Role Summary

Build and own AISI's risk platform. Automate systems that validate security controls, measure risk posture, and generate assurance evidence. We operate in a regulated government environment, but the goal is real security with auditable outputs, not checkbox compliance. This is a hands‑on engineering role. If you're looking for a traditional GRC analyst position, this isn't it.

Key Responsibilities
  • Build a continuous assurance platform that validates controls programmatically and generates evidence automatically (a sketch of this pattern follows the list)
  • Design policy‑as‑code pipelines that translate regulatory requirements (Gov Assure, CAF) into testable assertions
  • Create integration points that pull evidence from infrastructure, CI/CD, and ML pipelines without manual intervention
  • Automate evidence collection for AI‑specific artefacts: model documentation, evaluation results, release gates, weights handling
  • Build dashboards and reporting that give real‑time visibility into control status
  • Develop tooling that makes compliance invisible to researchers while giving assurance teams what they need
  • Work with platform and security engineers to embed controls at the infrastructure layer
  • Threat‑model evaluation pipelines and fix compliance gaps at the platform layer rather than through manual processes
  • Implement controls at the application layer when that's the right place to solve the problem
  • Implement controls for model handling, compute governance, and third‑party model/API usage
  • Engage governance and assurance teams, understand what they actually need, then build it
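
To make the continuous-assurance pattern above concrete, here is a minimal sketch of a single programmatic control check that emits a machine-readable evidence record. It assumes AWS via boto3; the CAF-style control ID, the evidence schema, and the bucket name are illustrative placeholders, not AISI's actual mappings or stack.

```python
"""One continuous-assurance check: validate a control programmatically
and emit an evidence record. Illustrative sketch only."""
import datetime
import json

import boto3
from botocore.exceptions import ClientError


def check_bucket_encryption(bucket: str) -> dict:
    """Return an evidence record stating whether default encryption is on."""
    s3 = boto3.client("s3")
    try:
        config = s3.get_bucket_encryption(Bucket=bucket)
        compliant = bool(config["ServerSideEncryptionConfiguration"]["Rules"])
    except ClientError as err:
        # S3 raises this specific code when no encryption config exists.
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            compliant = False
        else:
            raise
    return {
        "control_id": "B2.a",  # hypothetical CAF mapping, for illustration
        "resource": f"s3://{bucket}",
        "compliant": compliant,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    # In a real pipeline this would run on a schedule and push records
    # to an evidence store; here we just print one record.
    print(json.dumps(check_bucket_encryption("example-bucket"), indent=2))
```

In practice many such checks would run continuously, with their records feeding the evidence store that dashboards and assurance reporting read from.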

Artificial intelligence can be a useful tool to support your application; however, all examples and statements provided must be truthful, factually accurate, and taken directly from your own experience. Where plagiarism is identified (presenting the ideas and experiences of others, or content generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action.

Please see our candidate guidance for more information on appropriate and inappropriate use.

Clearance & Nationality

Successful candidates must undergo a criminal record check and obtain Baseline Personnel Security Standard (BPSS) clearance before they can be appointed. There is a strong preference for eligibility for Counter-Terrorist Check (CTC) clearance. Some roles may require higher levels of clearance; we will state this by exception in the job advertisement.

Nationality requirements

Qualifications
  • Staff‑level engineering experience (platform, infrastructure, application, or security)
  • Strong programming skills (Python, Go, or similar)
  • Comfortable with IaC, CI/CD pipelines, and cloud platforms (AWS preferred)
  • Experience working in a regulated or security‑conscious environment
  • Interest in treating compliance as an engineering problem
Useful but Teachable Experience
  • Direct experience with compliance frameworks (Gov Assure, CAF, SOC 2, ISO 27001, FedRAMP; experience with any of these transfers)
  • Familiarity with policy‑as‑code tools (Open Policy Agent, Conftest, Checkov)
  • Exposure to MLOps tooling (MLflow, Weights & Biases, model registries)
  • Understanding of AI/ML workflows and artefacts (see the sketch below)
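
As one illustration of a machine-checkable AI artefact, the sketch below shows a release gate that fails CI unless a model card carries the fields an assurance reviewer would need. The field names, schema, and file path are hypothetical, for illustration only.

```python
"""Hedged sketch of a release gate over a model documentation artefact.
The required-field schema is assumed, not a real AISI standard."""
import json
import sys
from pathlib import Path

# Hypothetical minimum schema for an auditable model card.
REQUIRED_FIELDS = {"model_name", "version", "eval_results", "approved_by"}


def gate(model_card_path: str) -> int:
    """Return 0 if the model card is complete, 1 (failing CI) otherwise."""
    card = json.loads(Path(model_card_path).read_text())
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        print(f"release gate FAILED: model card missing {sorted(missing)}")
        return 1
    print("release gate passed")
    return 0


if __name__ == "__main__":
    # Example CI usage: python gate.py model_card.json
    sys.exit(gate(sys.argv[1]))
```
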
About the Team

The AI Security Institute is the world's largest and best‑funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them.

With our resources, unique agility and international influence, this is the best place to shape both AI…
