
Data Engineer (AI/ML)

Job in Greater London, EC1A, England, UK
Listing for: VML
Full Time position
Listed on 2026-01-13
Job specializations:
  • IT/Tech
    Data Engineer, AI Engineer, Machine Learning/ML Engineer
Job Description & How to Apply Below
Position: Data Engineer (AI/ML)
Location: Greater London

Role: Data Engineer (AI/ML)
Role type: Permanent
Location: UK or Greece
Preferred start date: ASAP

As an organisation, we push the boundaries of data science, optimisation and artificial intelligence to solve the most complex problems in the industry. Satalia, a WPP company, is a community of individuals devoted to working on diverse and challenging projects, allowing you to flex your technical skills whilst working with a tight-knit team of high-performing colleagues.

Led by our founder and WPP Chief AI Officer Daniel Hulme, Satalia aims to become a decentralised organisation of the future. Today, this involves developing tools and processes to liberate and automate manual repetitive tasks, with a focus on freedom, transparency and trust. At the core of our thinking is an approach to wellbeing and inclusivity. We unpack human behaviour and unpick prejudice to ensure a safe and inviting environment.

We offer truly flexible working and allow our employees to find the working practice that makes them most productive. At Satalia, your opinion matters and your achievements are celebrated.

THE ROLE

We are investing massively in developing next‑generation AI tools for multimodal datasets and a wide range of applications. We are building large‑scale, enterprise‑grade solutions and serving these innovations to our clients and WPP agency partners. As a member of our team, you will work alongside world‑class talent in an environment that not only fosters innovation but also personal growth. You will be at the forefront of AI, leveraging multimodal datasets to build groundbreaking solutions over a multi‑year roadmap.

Your contributions will directly shape cutting‑edge AI products and services that make a tangible impact for FTSE 100 clients.

YOUR RESPONSIBILITIES
  • Collaborate closely with data scientists, architects, and other stakeholders to understand and break down business requirements.
  • Collaborate on schema design, data contracts, and architecture decisions, ensuring alignment with AI/ML needs.
  • Provide data engineering support for AI model development and deployment, ensuring data scientists have access to the data they need in the format they need it.
  • Leverage cloud‑native tools (GCP/AWS/Azure) for orchestrating data pipelines, AI inference workloads, and scalable data services.
  • Develop and maintain APIs for data services and serving model predictions.
  • Support the development, evaluation and productionisation of agentic systems with:
    • LLM-powered features and prompt engineering
    • Retrieval-Augmented Generation (RAG) pipelines (see the sketch after this list)
    • Multimodal vector embeddings and vector stores
    • Agent development frameworks: ADK, LangGraph, AutoGen
    • Model Context Protocol (MCP) for integrating agents with tools, data and AI services
    • Google's Agent2Agent (A2A) protocol for communication and collaboration between different AI agents
  • Implement and optimise data transformations and ETL/ELT processes, using appropriate data engineering tools.
  • Work with a variety of databases and data warehousing solutions to store and retrieve data efficiently.
  • Implement monitoring, troubleshooting, and maintenance procedures for data pipelines to ensure data quality and optimise performance.
  • Participate in the creation and ongoing maintenance of documentation, including data flow diagrams, architecture diagrams, data dictionaries, data catalogues, and process documentation.
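To give a flavour of the RAG work mentioned above, here is a minimal, illustrative sketch of the retrieval-and-prompt-assembly step such a pipeline performs. It is not Satalia's implementation: the `embed` callable and the in-memory "vector store" are hypothetical placeholders for whatever embedding model and vector database the production stack actually uses.

```python
# Minimal RAG retrieval sketch (illustrative only).
# `embed` is a hypothetical placeholder for an embedding model;
# the list of chunks stands in for a real vector store.
from typing import Callable, List, Tuple
import numpy as np


def top_k_chunks(
    query: str,
    chunks: List[str],
    embed: Callable[[str], np.ndarray],
    k: int = 3,
) -> List[Tuple[float, str]]:
    """Rank document chunks by cosine similarity to the query embedding."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    scored = []
    for chunk in chunks:
        v = embed(chunk)
        v = v / np.linalg.norm(v)
        scored.append((float(np.dot(q, v)), chunk))
    return sorted(scored, reverse=True)[:k]


def build_prompt(query: str, retrieved: List[Tuple[float, str]]) -> str:
    """Assemble the retrieved context and the user question into one LLM prompt."""
    context = "\n\n".join(chunk for _, chunk in retrieved)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```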
MINIMUM QUALIFICATIONS / SKILLS
  • High proficiency in Python and SQL.
  • Strong knowledge of data structures, data modelling, and database operations.
  • Proven hands‑on experience building and deploying data solutions on a major cloud platform (AWS, GCP, or Azure).
  • Familiarity with containerisation technologies such as Docker and Kubernetes.
  • Familiarity with Retrieval-Augmented Generation (RAG) applications and modern AI/LLM frameworks (e.g., LangChain, Haystack, Google GenAI).
  • Demonstrable experience designing, implementing, and optimising robust data pipelines for performance, reliability, and cost‑effectiveness in a cloud‑native environment.
  • Experience in supporting data science workloads and working with both structured and unstructured data.
  • Experience working with both…