
Principal Machine Learning Engineer

Job in Nottingham, Nottinghamshire, NG1, England, UK
Listing for: LSEG
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    AI Engineer, Data Engineer, Machine Learning/ML Engineer
Salary/Wage Range or Industry Benchmark: £125,000 – £150,000 per year (GBP)
Job Description & How to Apply Below

Principal Machine Learning Engineer (SageMaker, MLOps, Model Governance & Explainability)

We are seeking a Principal Machine Learning Engineer to provide technical leadership across the full lifecycle of machine learning systems powering a new matching platform. This role is accountable for defining ML architecture, establishing engineering standards, driving MLOps maturity, and ensuring that our models are scalable, secure, explainable, and governed to enterprise‑grade standards.

You will contribute to the strategic direction of our ML platform—spanning data pipelines, model development, deployment automation, inference runtime design, telemetry, drift detection, and cross‑account productionisation. You will mentor engineers, influence product and architectural decisions, and ensure that our ML systems operate reliably at scale, underpinned by a robust governance and compliance framework.

This is a highly hands‑on, highly technical, principal‑level role that combines architectural vision with deep practical expertise in ML engineering and AWS-native MLOps.

Key Responsibilities

Technical Leadership & Architecture
  • Define the end‑to‑end ML architecture for the matching platform, including data pipelines, model training workflows, inference runtimes, and telemetry ecosystems.
  • Lead adoption of best‑in‑class MLOps patterns, platform tooling, and AWS SageMaker capabilities across training, processing, registry, monitoring, and deployment.
  • Partner with platform, security, and data engineering teams to implement scalable, lakehouse‑oriented feature architectures and enterprise‑grade ML governance.
  • Champion engineering standards for model quality, documentation, observability, and platform resilience.
Feature Engineering & Data Architecture
  • Architect highly scalable, production‑ready feature pipelines within Lakehouse environments.
  • Set the technical direction for fallback and resilience strategies.
  • Establish and enforce data‑quality guardrails, validation schemas, and monitoring frameworks.
  • Drive adoption and standards for enterprise feature stores.
Model Development & Technical Excellence
  • Lead the design of ranking, scoring, and similarity models tailored to the matching platform requirements.
  • Define model calibration, scoring logic, confidence thresholds, and optimisation strategies.
  • Mentor teams on advanced ML techniques using frameworks such as PyTorch, TensorFlow, and XGBoost.
  • Review and approve technical designs for complex modeling workflows.
Explainability & Regulatory‑Grade Reasoning
  • Establish explainability standards across the ML stack, using SHAP or equivalent frameworks.
  • Define patterns to generate regulator‑ready reason codes, aligned with compliance requirements.
  • Ensure explainability artefacts are accurate, robust, and traceable across model versions.
ML Deployment & Automation (MLOps)
  • Architect automated training, deployment, and retraining pipelines using AWS SageMaker.
  • Set standards for model registry usage, automated approvals, and rollback orchestration.
  • Drive infrastructure‑as‑code and CI/CD maturity for ML systems across multiple environments.
  • Lead design of enterprise‑wide weight‑update patterns and lineage‑aware deployment strategies.
Inference Runtime & Cross‑Account Productionisation
  • Architect low‑latency, high‑throughput inference services that meet strict matching platform SLAs.
  • Lead the design of secure cross‑account IAM patterns for model consumption.
  • Own end‑to‑end telemetry design, including scoring metrics, latency, error analytics, and SLOs.
  • Partner with platform teams to optimise cost, scale, and reliability of inference endpoints.
Monitoring, Drift Detection & Observability
  • Define observability standards for feature drift, concept drift, performance degradation, and data integrity.
  • Lead the creation of dashboards, benchmarks, and automated alerting across the ML ecosystem.
  • Ensure telemetry pipelines adhere to privacy, data minimisation, and compliance policies.
  • Drive adoption of proactive failover, shadow‑mode testing, and continuous validation patterns.
Security, Compliance & ML Governance
  • Set and enforce ML‑specific security standards including data minimisation, encryption, and PII…