
Senior Data Engineer

Job in London, Greater London, EC1A, England, UK
Listing for: Publicis Groupe Holdings B.V.
Full Time position
Listed on 2026-01-13
Job specializations:
  • IT/Tech
    Data Engineer, AI Engineer, Machine Learning/ML Engineer, Data Science Manager
Job Description & How to Apply Below
Location: Greater London

Company description

With a history that dates back over 80 years, Starcom is a global communications planning and media leader. We are an agency still grounded in our founding principle that people are at the centre of all we do. Each day, we apply this belief to harness the transformative power of data and technology to inspire and move people and business forward. With more than 7,000 employees in over 100 offices around the world, we are the flagship Publicis Media agency that uses our ‘Power of One’ business model, with teams that span multiple disciplines across clients such as Aldi, P&G, P&O Ferries, Primark, Samsung, Stellantis, and Visa.

We place a huge focus on our People and have driven flagship D&I and L&D programmes within Publicis Media; our goal is to help every individual reach their fullest potential, and we encourage everyone to make “Brave Plays” in how they approach their work and their own career development. As a result, we have an exceptionally energised and committed talent base, all of us proud of our welcoming and supportive culture, as evidenced by our recognition as one of Campaign’s Best Places to Work three years in a row (2021, 2022 and 2023) and, most excitingly, as Media Week's Agency of the Year 2023!

Overview

What will you be doing?

This role presents an opportunity to engage deeply with MLOps, vector databases, and Retrieval-Augmented Generation (RAG) pipelines – skills that are in incredibly high demand. If you are passionate about shaping the future of AI and thrive on complex, high-impact challenges, we encourage you to apply.
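
To give a flavour of the work, the retrieval half of a RAG pipeline can be prototyped in a few lines of Python. The sketch below assumes Chroma as the vector store; the collection name, documents, and query are invented placeholders rather than a description of our actual stack.

    import chromadb

    # In-memory Chroma client; a persistent or hosted store would be used in practice.
    client = chromadb.Client()
    collection = client.create_collection(name="campaign_docs")

    # Index a few documents; Chroma applies its default embedding function here.
    collection.add(
        ids=["doc-1", "doc-2"],
        documents=[
            "Q3 media plan for the retail client, covering TV and digital.",
            "Audience segmentation notes for the automotive campaign.",
        ],
    )

    # Retrieve the chunks most relevant to a question; in a full RAG pipeline
    # these would be passed to an LLM as grounding context for generation.
    results = collection.query(
        query_texts=["What does the Q3 media plan cover?"],
        n_results=2,
    )
    print(results["documents"][0])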

Responsibilities

As a Senior Data Engineer for AI/ML, you will be the architect and builder of the data infrastructure that feeds our intelligent systems. Your responsibilities will include:

  • Design and Build Scalable Data Pipelines: Architect, implement, and optimize robust, high-performance real-time and batch ETL pipelines to ingest, process, and transform massive datasets for LLMs and foundational AI models.
  • Cloud-Native Innovation: Leverage your deep expertise across AWS, Azure, and/or GCP to build cloud-native data solutions, ensuring efficiency, scalability, and cost-effectiveness.
  • Power Generative AI: Develop and manage specialized data flows for generative AI applications, including integrating with vector databases and constructing sophisticated RAG pipelines.
  • Champion Data Governance & Ethical AI: Implement best practices for data quality, lineage, privacy, and security, ensuring our AI systems are developed and used responsibly and ethically.
  • Tooling the Future: Get hands-on with cutting-edge technologies like Hugging Face, PyTorch, TensorFlow, Apache Spark, Apache Airflow, and other modern data and ML frameworks (a skeleton Airflow pipeline is sketched after this list).
  • Collaborate and Lead: Partner closely with ML Engineers, Data Scientists, and Researchers to understand their data needs, provide technical leadership, and translate complex requirements into actionable data strategies.
  • Optimize and Operate: Monitor, troubleshoot, and continuously optimize data pipelines and infrastructure for peak performance and reliability in production environments.
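
For the pipeline and orchestration bullets above, here is a minimal sketch of a daily batch ETL job defined with Apache Airflow (assuming Airflow 2.4 or later); the DAG id, schedule, and task bodies are illustrative placeholders only.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; real tasks would pull from source systems,
    # transform with Spark or SQL, and load into a warehouse or feature store.
    def extract():
        print("extracting raw events")

    def transform():
        print("cleaning and transforming events")

    def load():
        print("loading curated data for model training")

    with DAG(
        dag_id="daily_training_data_refresh",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Run the steps strictly in sequence.
        extract_task >> transform_task >> load_task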

What You'll Bring:

We are seeking a seasoned professional who is excited by the unique data challenges of AI.

Qualifications

What are we looking for?

Must-Have Skills:

  • Extensive Data Engineering Experience: Proven track record (3+ years) in designing, building, and maintaining large-scale data pipelines and data warehousing solutions.
  • Cloud Platform Mastery: Expert-level proficiency with at least one major cloud provider (GCP preferred; AWS or Azure), including their data, compute, and storage services.
  • Programming Prowess: Strong programming skills in Python and SQL are essential.
  • Big Data Ecosystem Expertise: Hands‑on experience with big data technologies like Apache Spark, Kafka, and data orchestration tools such as Apache Airflow or Prefect.
  • ML Data Acumen: Solid understanding of data requirements for machine learning models, including feature engineering, data validation, and dataset versioning (a brief PySpark illustration follows this list).
  • Vector Database Experience: Practical experience working with vector databases (e.g., Pinecone, Milvus, Chroma) for embedding storage and retrieval.
  • Generative…
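
As referenced in the ML Data Acumen bullet above, a brief PySpark sketch of the kind of validation and feature engineering involved; the table, column names, and values are invented for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("feature_prep_sketch").getOrCreate()

    # Hypothetical impression-level events; schema and values are illustrative.
    events = spark.createDataFrame(
        [("u1", "2024-05-01", 3, 1), ("u2", "2024-05-01", 7, 0)],
        ["user_id", "event_date", "impressions", "clicks"],
    )

    # Basic validation: drop nulls and reject impossible counts before the
    # data reaches a training set.
    clean = events.dropna().filter((F.col("impressions") > 0) & (F.col("clicks") >= 0))

    # Simple feature engineering: per-row click-through rate.
    features = clean.withColumn("ctr", F.col("clicks") / F.col("impressions"))
    features.show()
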
Position Requirements
10+ Years work experience