Senior Data Engineer
Listed on 2026-02-28
IT/Tech
Data Engineer, Cloud Computing
Location: Greater London
We are seeking a Senior Data Engineer to take ownership of the design, build, and evolution of our data platform. In this role, you will work closely with cross-functional teams to translate complex business challenges into elegant, scalable data solutions. You will be expected to write high-quality, maintainable, and thoroughly tested code that aligns with modern engineering standards. As a senior contributor, you will play a key role in guiding technical direction, supporting team development, and mentoring other engineers.
You will join at an exciting phase of transformation as we modernise our legacy data stack — moving from Python scripts and MySQL to a scalable, cloud-native architecture built on Databricks, Snowflake, Kafka, Python, and PySpark. This transition is a core strategic initiative, and you will have the opportunity to lead and shape it.
What You’ll Do
- Architect and advance a configuration-led data framework that allows non-technical users to define regulatory logic.
- Develop and maintain scalable data pipelines processing transaction-level data using platforms such as Databricks, Snowflake, Kafka, Python, and PySpark.
- Implement robust data quality frameworks, including automated testing, validation layers, monitoring, and alerting mechanisms.
- Take full ownership of pipelines from initial design through deployment, operational monitoring, and continuous improvement.
- Work closely with Product and Engineering teams to convert regulatory and business requirements into efficient technical solutions.
- Mentor team members, share knowledge, and contribute to raising overall engineering standards.
- Drive ongoing enhancements to tooling, processes, and development workflows.
- Partner with DevOps to ensure optimal performance and cost-efficiency of Snowflake and Databricks infrastructure.
What You’ll Bring
- Strong hands-on experience building data pipelines using Python and PySpark, including performance optimisation of Spark workloads.
- Proven experience designing auditable and reproducible data pipelines with strong lineage, idempotency, and replay/backfill capabilities — particularly in regulated or high-accuracy environments.
- Advanced SQL skills, including writing and tuning complex queries across large-scale datasets.
- Solid grounding in data modelling principles and database architecture.
- Strong software engineering practices applied to data systems (clean, modular code; automated testing; CI/CD; observability).
- Comfortable working independently with full accountability for the systems you design and maintain.
- Experience using orchestration frameworks and modern cloud data warehouse platforms.
- Ability to interpret regulatory or compliance-driven requirements and convert them into technical specifications alongside non-technical stakeholders.
- Experience operating Kafka or similar event-streaming platforms.
- Hands-on exposure to AWS cloud infrastructure.
- Experience building greenfield data platforms or leading legacy-to-modern data migrations.
- Background in FinTech or financial services.