
AI Test Lead (AI Foundry)

Job in Leeds, West Yorkshire, ME17, England, UK
Listing for: Gazelle Global
Full Time position
Listed on 2026-02-17
Job specializations:
  • IT/Tech
    Data Security, Cybersecurity, AI Engineer
Salary/Wage Range: 100,000 – 125,000 GBP yearly
Job Description & How to Apply Below
Position: AI Test Lead (AI Foundry)

We are looking for an experienced AI Test Lead (AI Foundry) to ensure AI and Copilot solutions are safe, reliable and compliant, covering both traditional QA and AI‑specific risks (bias, hallucination, explainability). The role defines assurance methods, quality gates and post‑deployment monitoring to meet internal policy and regulator expectations.

  • Design and manage the enterprise testing strategy for AI/Copilot, blending traditional QA with AI‑specific methods.
  • Define test approaches for functional, performance, accuracy, reliability, ethical compliance and bias detection.
  • Validate explainability, traceability and safety controls against policy and regulatory requirements.
  • Evaluate and test human‑in‑the‑loop workflows and decision checkpoints for appropriate oversight.
  • Embed quality gates in iterative delivery, preventing progression without assurance evidence.
  • Develop and maintain specialized test datasets, including adversarial, low‑quality, domain‑specific and edge‑case inputs, to rigorously challenge model robustness and identify systemic weaknesses.
  • Provide AI test engineering support to delivery squads, advising on model‑readiness criteria, testability risks, and quality implications of design decisions, ensuring solutions are verifiable throughout the lifecycle.
  • Define and run post‑deployment validation, drift detection, incident triage and continuous model‑monitoring.
  • Partner with Risk, Legal, Security and Compliance teams to meet control frameworks and audit standards.
  • Provide inputs to risk/impact assessments, policy adherence checks and governance submissions.
  • Lead incident investigations for unexpected AI behaviours, conducting deep‑dive root‑cause analysis across data quality, model logic, prompt flows, integration layers and human‑in‑the‑loop steps; identify systemic failure points, recommend corrective actions, and drive end‑to‑end remediation to prevent recurrence.
  • Maintain test documentation, evaluation logs, datasets and reproducible evidence for audit.
  • Uplift AI testing capability across teams through standards, templates, training and hands‑on support.
  • Champion continuous improvement of AI assurance, evaluating new testing tooling (LLM‑monitoring, bias‑scanners, prompt‑diff tools, synthetic data generators) and maturing standards as organisational AI adoption scales.
  • Ensure responsible AI principles (e.g., transparency, explainability, ISO 42001) are incorporated into all development.
  • Provide insight to support business cases, investment decisions, risk assessments, and prioritisation discussions at AI governance forums.
  • Manage escalations, supporting the wider Data & AI Leadership team.
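As a flavour of the post‑deployment drift detection mentioned in the responsibilities above, here is a minimal sketch of one common technique, the Population Stability Index (PSI), which compares a production feature distribution against the distribution seen at model sign‑off. The bin count, sample data and alert thresholds below are illustrative assumptions, not part of this role's toolchain.

```python
# Minimal PSI-based drift check (illustrative sketch; stdlib only).
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample.

    Bins are derived from the reference (training-time) data; live values
    outside the reference range are clamped into the edge bins.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        total = sum(counts) or 1
        eps = 1e-6  # avoid log(0) / division by zero in empty bins
        return [c / total + eps for c in counts]

    e_pct = proportions(expected)
    a_pct = proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_pct, a_pct))

random.seed(42)
train   = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # sign-off data
stable  = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # production, no drift
shifted = [random.gauss(0.8, 1.2) for _ in range(10_000)]  # production, drifted

# A common rule of thumb: PSI < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"no drift: {psi(train, stable):.3f}")
print(f"drifted:  {psi(train, shifted):.3f}")
```

In a continuous‑monitoring pipeline a check like this would run per feature on a schedule, with breaches feeding the incident‑triage process described above.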
Functional/Technical (Role Specific)
  • Higher education qualification (or equivalent experience) in Ethics, Law, Risk Management, Social Sciences, Data/Computer Science or relevant field
  • Experience with designing and leading testing for complex digital or data‑driven systems, including multi‑component architectures, API‑integrated platforms, event‑driven workflows and systems operating under regulatory or high‑assurance constraints.
  • Clear understanding of AI‑specific risks such as hallucinations, bias, drift, explainability gaps, safety breaches and misuse pathways, paired with the ability to design targeted tests that uncover model blind spots and systemic weaknesses.
  • Familiarity with governance, audit and regulatory standards for AI, data and digital services, ensuring testing evidence aligns with internal risk frameworks, ISO 42001 controls, Responsible AI policies and external regulatory expectations.
  • Experience developing structured QA strategies that integrate traditional and AI‑specific assurance, mapping out test plans, risk‑based prioritisation, acceptance criteria, model‑readiness thresholds and quality gates aligned to lifecycle stages.
  • Ability to define and execute test plans across functional, non‑functional, ethical and performance dimensions, validating accuracy, latency, robustness, security, fairness, reliability and user‑journey consistency.
  • Strong analytical mindset with the ability to identify root causes of defects or unexpected AI behaviour, performing deep‑dive diagnostics across data…