Machine Learning Engineer
Listed on 2026-01-13
IT/Tech
AI Engineer, Machine Learning/ML Engineer, Data Engineer, Cloud Computing
Location: Greater London
At TWG Group Holdings, LLC ("TWG Global"), we drive innovation and business transformation across a range of industries, including financial services, insurance, technology, media, and sports, by leveraging data and AI as core assets. Our AI‑first, cloud‑native approach delivers real‑time intelligence and interactive business applications, empowering informed decision‑making for both customers and employees.
We prioritize responsible data and AI practices, ensuring ethical standards and regulatory compliance. Our decentralized structure enables each business unit to operate autonomously, supported by a central AI Solutions Group, while strategic partnerships with leading data and AI vendors fuel game‑changing efforts in marketing, operations, and product development.
You will collaborate with management to advance our data and analytics transformation, enhance productivity, and enable agile, data‑driven decisions. By leveraging relationships with top tech startups and universities, you will help create competitive advantages and drive enterprise innovation.
At TWG Global, your contributions will support our goal of sustained growth and superior returns, as we deliver rare value and impact across our businesses.
The Role
As the Staff Machine Learning Engineer (VP) on the ML Engineering team, you will be responsible for designing, deploying, and scaling advanced ML systems that power core business functions across the enterprise. Reporting to the Executive Director of ML Engineering, you will play a critical role in building production-grade ML infrastructure, reusable frameworks, and scalable model pipelines that deliver measurable business outcomes, ranging from cost optimization to top-line growth.
You will act as a technical thought leader and strategic partner, shaping the organization's machine learning engineering practices and fostering a culture of operational excellence, reliability, and responsible AI adoption.
- Architect and deploy ML systems and platforms that solve high‑impact business problems across regulated enterprise environments.
- Lead the development of production‑ready pipelines, including feature stores, model registries, and scalable inference services.
- Champion MLOps best practices (CI/CD for ML, model versioning, monitoring, observability) to ensure models are reliable, reproducible, and cost‑efficient (a brief illustrative sketch follows this list).
- Partner with Data Scientists to operationalize experimental models, enabling scalability and generalizability across diverse business domains.
- Integrate emerging ML engineering techniques (e.g., LLM deployment, fine‑tuning pipelines, vector databases, RAG systems) into enterprise‑ready solutions.
- Own the design of foundational ML platforms and frameworks that serve as building blocks for downstream AI applications.
- Embed controls, governance, and auditability into ML workflows, ensuring compliance with regulatory standards and responsible AI principles.
- Collaborate with Engineering, Product, and Security teams to embed ML‑driven decision‑making into enterprise platforms and workflows.
- Define and track engineering and model performance metrics (latency, scalability, cost, accuracy) to optimize systems in production.
- Mentor and coach ML engineers, fostering technical excellence, collaboration, and innovation within the AI Science team.
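To make the MLOps expectations above more concrete, here is a minimal, illustrative sketch of model versioning and registration with MLflow (one of the frameworks named in the qualifications below). The experiment name, dataset, hyperparameters, and registered model name are hypothetical placeholders, and the sketch assumes an MLflow tracking server with a model registry backend is available; it is not a description of TWG Global's actual pipelines.

```python
# Hypothetical sketch: train a model, log parameters and metrics,
# and register the artifact so downstream services can pin a version.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("churn-model")  # hypothetical experiment name

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Each registration creates a new, immutable model version that a
    # CI/CD pipeline can promote through staging and production.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="churn-classifier",  # hypothetical registry name
    )
```

Registering every run as a distinct model version is what allows CI/CD to promote, roll back, and audit models without redeploying code, which is the kind of reproducibility the role calls for.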
- 8+ years of experience building and deploying machine learning systems in production environments at enterprise or platform scale.
- Proven track record of leading ML engineering projects from architecture to deployment, including ownership of production‑grade systems.
- Deep expertise in ML frameworks and engineering stacks (TensorFlow, PyTorch, JAX, Ray, MLflow, Kubeflow).
- Proficiency in Python and at least one backend language (e.g., Java, Scala, Go, C++).
- Strong understanding of cloud ML infrastructure (AWS SageMaker, GCP Vertex AI, Azure ML) and containerized deployments (Kubernetes, Docker).
- Hands‑on experience with data and model pipelines (feature stores, registries, distributed training, inference scaling).
- Knowledge of observability and monitoring stacks (Prometheus, Grafana, ELK, Datadog) for ML system performance (see the illustrative sketch after this list).
- Exp…
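As a small illustration of the observability expectations above, the sketch below instruments a toy inference path with Prometheus client metrics. The metric names, labels, port, and predict stub are hypothetical and only demonstrate the latency/throughput tracking pattern; a real deployment would be scraped and visualized by the Prometheus/Grafana stack named in the qualifications.

```python
# Hypothetical sketch: expose inference latency and request counts
# to Prometheus from a Python model-serving process.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric and label names below are illustrative placeholders.
REQUESTS = Counter("inference_requests_total", "Total inference requests", ["model"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds", ["model"])

def predict(features):
    """Stand-in for a real model call; sleeps to simulate work."""
    time.sleep(random.uniform(0.01, 0.05))
    return sum(features)

def handle_request(features, model_name="churn-classifier"):
    REQUESTS.labels(model=model_name).inc()
    with LATENCY.labels(model=model_name).time():  # records an observation on exit
        return predict(features)

if __name__ == "__main__":
    start_http_server(8000)  # metrics are scraped at http://localhost:8000/metrics
    while True:
        handle_request([random.random() for _ in range(5)])
```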