
Data Engineer - Engine by Starling

Job in City of Westminster, Central London, Greater London, England, UK
Listing for: The Engine
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Data Engineer
Salary/Wage Range: GBP 60,000 - 80,000 per year
Job Description & How to Apply Below
Position: Data Engineer - Engine by Starling
Location: City of Westminster

As Engine is Starling's SaaS offering, we hold all of the data needed to run our client banks. We need to model, extract, join, format and ultimately securely share data with our clients so they can get insights into their business, build regulatory reports and run marketing campaigns. We're already sharing millions of rows of data with our clients every day, and this is set to grow over the coming years.

We're investing in our internal and external reporting tooling so we can give our clients better insights faster, and support the internal operations of Engine. As a Data Engineer you'll be at the heart of our reporting tooling, adding new data features and improving how we expose new entities to our clients and operations teams. You'll also help build tooling that gives us better visibility into data lineage, data quality and the accuracy of our documentation.

You'll also assist our platform engineers in modelling new features in a way that helps clients use the data later. Engine Engineers are excited about helping us deliver new features, regardless of what their primary tech stack may be.

Responsibilities
  • Shape the future of data for Engine, including approaches, tooling and architecture.
  • Develop data as a core product offering for Engine both internally and for our clients, working with and responding to client feedback and market analysis.
  • Work across the boundary of software engineering and core data platform challenges.
  • Understand, build and develop data integration and warehousing solutions.
  • Deliver exceptional data solutions promoting a self‑service culture through trusted pipelines, quality checks, clear documentation, lineage, entity relationships and governance.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, etc.
  • Coach and mentor software engineers in the ways of data engineering across the organisation.
  • Obtain a wide and varied understanding of how our internal teams and client banks operate.
  • Work with cloud‑based infrastructure (AWS, GCP) for hosting data solutions and applications.
  • Collaborate with clients, solution architects and other engineers to help meet the client goals.
Interview Process

Interviewing is a two‑way process, and we want you to have the time and opportunity to get to know us as much as we are getting to know you! Our interviews are conversational and we want to get the best from you, so come with questions and be curious. In general, after a chat with one of our Talent Team, you can expect the following steps:

  • Initial interview with our Staff Data Engineer – ~45 minutes
  • Take-home technical test, to be discussed in the next interview
  • Technical interview with some Engineers – ~1.5 hours
  • Final interview with our CTO / deputy CTO – ~45 minutes
Qualifications
  • Proven experience in development and maintenance of a cloud‑based data warehouse.
  • Strong experience with SQL and relational databases (preferably Postgres); working with Change Data Capture is a bonus.
  • Data modelling knowledge, breaking down backend logic to understand and form a holistic data model (e.g., 3NF, star schema, Data Vault).
  • Strong experience with Python, TypeScript or Java (a significant amount of work will be in Java; you are not expected to know it today, but to learn it from the team, as it makes up a large part of the stack).
  • Good knowledge of Data Engineering tooling such as dbt or Spark. CDC tools like Debezium are a bonus.
  • Build data systems with a software and infrastructure engineer mindset, including tested, scalable, resilient, fault‑tolerant, observable and "as code" practices.
  • Good understanding of DevOps practices, Infrastructure as Code and Continuous Integration/Continuous Deployment.
Desirable
  • Experience extracting, loading and transforming large data sets (> 100GB).
  • Experience with schema evolution tools such as Flyway or Liquibase.
  • Experience with AWS (S3, IAM, RDS).
  • Translate internal data user needs into building BI Dashboards to answer their key business questions.
  • Data capabilities outside of engineering (e.g., data catalogue, data modelling, data lineage, data governance, data…