Machine Learning Engineer
Listed on 2026-02-28
IT/Tech
Data Engineer, AI Engineer, Cloud Computing, Systems Engineer
Location: Hybrid (Birmingham or London)
Permanent
We are TXP. We help businesses and organisations move forward at pace, and we believe in the transformative power of combining technology and people. By providing consulting expertise, development services and resourcing, we work closely with organisations to solve their most complex business problems.
Our work transforms organisations – and we take that responsibility seriously. We focus on success, pursue excellence and take ownership of everything we do.
But achieving that level of performance requires an inclusive and supportive working environment. We believe in the power of technology and people, and we help everyone here to succeed. At TXP, you can multiply your potential.
Role Purpose
The Machine Learning Engineer is a client-facing delivery role responsible for taking machine learning models from prototype to production. Operating within a consulting model, this role bridges data science and platform engineering, ensuring that models developed during engagements are deployed as reliable, scalable and maintainable services. The role owns the full productionisation lifecycle, from environment configuration and pipeline orchestration through to performance monitoring and scaling.
Close collaboration with data scientists is fundamental, translating their analytical work into robust systems that deliver sustained business value.
Key Responsibilities
- Take machine learning models developed by data scientists and re-engineer them for production deployment.
- Refactor prototype code into clean, modular, tested Python packages with clear separation of concerns.
- Implement inference pipelines that handle data validation, preprocessing, prediction and post-processing as a single deployable unit.
- Ensure models meet non-functional requirements including latency, throughput, reliability and resource efficiency before release.
- Manage the transition from notebook-based experimentation to production-grade services with minimal loss of analytical intent.
Environment Tuning and Scaling
- Configure and optimise compute environments on Azure AI Foundry, including managed endpoints, compute clusters and containerised deployments.
- Right-size infrastructure for model serving workloads, balancing cost against performance and availability requirements.
- Implement auto-scaling strategies for inference endpoints to handle variable demand patterns.
- Tune runtime configurations including batch sizes, concurrency settings, memory allocation and GPU utilisation where applicable.
- Conduct load testing and performance benchmarking to validate deployment readiness under expected and peak conditions.
Code and Pipeline Management
- Design and maintain CI/CD pipelines for model training, validation and deployment using Azure DevOps or GitHub Actions.
- Implement automated model retraining pipelines triggered by schedule, data drift or performance degradation.
- Manage model versioning, artefact storage and promotion workflows through the Azure AI Foundry model registry.
- Enforce code quality standards through automated linting, unit testing, integration testing and code review processes.
Model Monitoring and Operations
- Implement monitoring for deployed models covering prediction drift, data drift, feature distribution shifts and performance degradation.
- Build alerting and escalation workflows that trigger investigation or automated retraining when thresholds are breached.
- Maintain logging and observability across inference pipelines to support debugging, audit and compliance requirements.
- Produce operational runbooks and incident response procedures for model services.
- Track model performance against business metrics, not just statistical metrics, to ensure continued value delivery.
Collaboration with Data Scientists
- Work alongside data scientists from early in the model development lifecycle to ensure production readiness is designed in, not retrofitted.
- Provide guidance on coding standards, testing practices and architectural patterns that accelerate the path from prototype to deployment.
- Review data science code for production suitability, identifying scalability risks, dependency issues and maintainability concerns.
- Jointly define model interfaces, input/output schemas and contract testing approaches to decouple model development from serving infrastructure.
- Facilitate structured handover processes that capture model assumptions, training procedures, known limitations and retraining requirements.
Client Delivery
- Operate within consulting delivery frameworks, managing scope, timelines and stakeholder expectations.
- Contribute to estimation, solution architecture and proposal development for MLOps and model deployment work streams.
- Present deployment architectures, operational plans and trade-off analyses to client technical and business stakeholders.
- Conduct knowledge transfer sessions and produce handover documentation for client engineering teams.
- Identify opportunities to improve client ML maturity through better tooling, processes and…