
Senior Data Engineer

Job in City Of London, Central London, Greater London, England, UK
Listing for: Pantheon
Full Time position
Listed on 2026-01-14
Job specializations:
  • IT/Tech
    Data Engineer
Job Description & How to Apply Below
Location: City Of London

Pantheon has been at the forefront of private markets investing for more than 40 years, earning a reputation for providing innovative solutions covering the full lifecycle of investments, from primary fund commitments to co‑investments and secondary purchases, across private equity, real assets and private credit.

We have partnered with more than 650 clients, including institutional investors of all sizes as well as a growing number of private wealth advisers and investors, with approximately $65 bn in discretionary assets under management (as of December 31, 2023).

Leveraging our specialized experience and global team of professionals across Europe, the Americas and Asia, we invest with purpose and lead with expertise to build secure financial futures.

Pantheon is undergoing a multi‑year program to build out a new best‑in‑class Data Platform using cloud‑native technologies hosted in Azure. We require an experienced and passionate hands‑on Senior Data Engineer to design and implement new data pipelines that adapt to business and technology changes. This role will be integral to the success of this program and to establishing Pantheon as a data‑centric organization.

You will work with a modern Azure tech stack; proven experience ingesting and transforming data from a variety of internal and external systems is core to the role.

You will be part of a small and highly skilled team, and you will need to be passionate about providing best‑in‑class solutions to our global user base.

Key Responsibilities
  • Design, build, and maintain scalable, secure, and high‑performance data pipelines on Azure, primarily using Azure Databricks, Azure Data Factory, and Azure Functions.
  • Develop and optimise batch and streaming data processing solutions using PySpark and SQL to support analytics, reporting, and downstream data products.
  • Implement robust data transformation layers using dbt, ensuring well‑structured, tested, and documented analytical models.
  • Collaborate closely with business analysts, QA teams, and business stakeholders to translate data requirements into reliable technical solutions.
  • Ensure data quality, reliability, and observability through automated testing, monitoring, logging, and alerting.
  • Lead on performance tuning, cost optimisation, and capacity planning across Databricks and associated Azure services.
  • Implement and maintain CI/CD pipelines using Azure DevOps, promoting best practices for version control, automated testing, and deployment.
  • Enforce data governance, security, and compliance standards, including access controls, data lineage, and auditability.
  • Contribute to architectural decisions and provide technical leadership, mentoring junior engineers and setting engineering standards.
  • Produce clear technical documentation and contribute to knowledge sharing across the data engineering function.
Knowledge & Experience Required
Essential Technical Skills
  • Python and PySpark for large‑scale data processing.
  • SQL (advanced querying, optimisation, and data modelling).
  • Azure Data Factory (pipeline orchestration and integration).
  • Azure DevOps (Git, CI/CD pipelines, release management).
  • Azure Functions / serverless data processing patterns.
  • Data modelling (star schemas, data vault, or lakehouse‑aligned approaches).
  • Data quality, testing frameworks, and monitoring/observability.
  • Strong problem‑solving ability and a pragmatic, engineering‑led mindset.
  • Experience in an Agile software development environment.
  • Excellent communication skills, with the ability to explain complex technical concepts to both technical and non‑technical stakeholders.
  • Leadership and mentoring capability, with a focus on raising engineering standards and best practices.
  • Significant commercial experience (typically 5+ years) in data engineering roles, with demonstrable experience designing and operating production‑grade data platforms.
  • Strong hands‑on experience with Azure Databricks, including cluster configuration, job orchestration, and performance optimisation.
  • Proven experience building data pipelines with Databricks and Azure Data Factory; integrating with Azure‑native services (e.g., Data Lake Storage Gen2, Azure Functions).
  • Advanced…
Position Requirements
10+ years' work experience