
Data Engineer, Business Operations

Job in London, EC1A, England, UK
Listing for: Nscale
Full Time position
Listed on 2026-01-16
Job specializations:
  • IT/Tech
    Data Engineer, Data Analyst
Salary/Wage Range: £60,000–£80,000 (GBP) per year
Job Description & How to Apply Below
Location: Greater London

Data Engineer, Business Operations, London

About Nscale

Nscale is the GPU cloud engineered for AI. We provide cost‑effective, high‑performance infrastructure for AI start‑ups and large enterprise customers. Nscale enables AI‑focused companies to achieve superior results by reducing the complexity of AI development. Our GPU cloud bolsters technical capabilities and directly supports strategic business outcomes, including cost management, rapid innovation, and environmental responsibility.

We thrive on a culture of relentless innovation, ownership, and accountability, where every team member takes pride in their work and drives it with excellence and urgency. As an Nscaler, you’ll build trust through openness and transparency, where everyone is inspired to do their best work. If you join our team, you’ll be contributing to building the technology that powers the future.

About the Role

We’re looking for a Data Engineer to help design, build, and operate the data foundations that underpin Nscale’s platform, internal operations, and customer‑facing capabilities.

This is a high‑impact, early‑stage role. You’ll work closely with Operations, Infrastructure, Platform Engineering, Product, and Commercial teams to turn raw operational signals — from GPUs, clusters, customers, and internal systems — into reliable, scalable data products that accelerate delivery across the company. You’ll help define how data is collected, modelled, served, and trusted company‑wide, and have the opportunity to build on Palantir Foundry as well as other products.

This role is ideal for someone who enjoys building data systems from first principles, thrives in ambiguous environments, and wants to see their work directly influence product decisions, platform reliability, and customer outcomes.

What you'll be doing

  • Design and build scalable, reliable data pipelines that ingest data from infrastructure, platform services, and business systems.
  • Define data models and schemas that support operational workflows and use cases across the business, monitoring, and analytics.
  • Clean, transform, and structure data to create a digital twin of Nscale.
  • Implement permissioning and manage access and security of the Foundry implementation.
Data Products & Enablement
  • Create trusted datasets and metrics that power workflows and processes, internal tools, and customer‑facing insights.
  • Enable self‑serve analytics by establishing clear data contracts, documentation, and semantic layers.
  • Build use cases including but not limited to capacity planning, cost optimisation, reliability analysis, and customer reporting to drive our business forward.
  • Collaborate with Product and Commercial teams to translate real‑world questions into robust data solutions.
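Purely as an illustrative sketch (not drawn from the posting), a lightweight "data contract" of the kind mentioned above can be expressed as an explicit column/dtype schema that downstream consumers validate against; every dataset and column name here is hypothetical:

```python
import pandas as pd

# Hypothetical contract for a customer-reporting dataset: column -> expected dtype.
CUSTOMER_USAGE_CONTRACT = {
    "customer_id": "object",
    "gpu_hours": "float64",
    "billing_month": "object",
}

def validate_contract(frame: pd.DataFrame, contract: dict[str, str]) -> list[str]:
    """Compare a DataFrame against a column/dtype contract, returning violations."""
    errors = []
    for col, dtype in contract.items():
        if col not in frame.columns:
            errors.append(f"missing column: {col}")
        elif str(frame[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {frame[col].dtype}")
    return errors

usage = pd.DataFrame({
    "customer_id": ["acme", "globex"],
    "gpu_hours": [1200.5, 340.0],
    "billing_month": ["2026-01", "2026-01"],
})
print(validate_contract(usage, CUSTOMER_USAGE_CONTRACT))  # [] -> contract satisfied
```

Publishing a contract like this alongside a dataset gives self‑serve consumers a machine‑checkable definition of what the producing team guarantees.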
Reliability, Quality & Governance
  • Implement data quality checks, monitoring, and alerting to ensure data correctness and availability.
  • Codify data lineage, freshness, and consistency across systems.
  • Establish best practices around data versioning, access control, and governance appropriate for a fast‑scaling company.
  • Continuously improve system resilience and observability.
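As an illustrative sketch only (not part of the role description), data quality checks of the kind listed above — completeness, validity, and freshness — might look like this in Python with pandas; all dataset and column names are invented for illustration:

```python
from datetime import datetime, timedelta, timezone
import pandas as pd

# Hypothetical GPU utilisation dataset; column names are invented for illustration.
df = pd.DataFrame({
    "cluster_id": ["c1", "c1", "c2"],
    "gpu_util_pct": [87.5, 91.2, 64.0],
    "recorded_at": [datetime.now(timezone.utc)] * 3,
})

def check_quality(frame: pd.DataFrame, max_age: timedelta) -> list[str]:
    """Return a list of data-quality violations (empty means all checks pass)."""
    problems = []
    # Completeness: key columns must not contain nulls.
    for col in ("cluster_id", "gpu_util_pct"):
        if frame[col].isna().any():
            problems.append(f"null values in {col}")
    # Validity: utilisation is a percentage, so it must lie in [0, 100].
    if not frame["gpu_util_pct"].between(0, 100).all():
        problems.append("gpu_util_pct outside 0-100")
    # Freshness: the newest record must be recent enough.
    newest = frame["recorded_at"].max()
    if datetime.now(timezone.utc) - newest > max_age:
        problems.append("data is stale")
    return problems

violations = check_quality(df, max_age=timedelta(hours=1))
print(violations)  # an empty list here means all checks passed
```

In production such checks would typically run on a schedule and feed an alerting system rather than a print statement.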
Early‑Stage Ownership & Growth
  • Take end‑to‑end ownership of projects, from design through to production and iteration.
  • Help define standards, tooling, and ways of working for data at Nscale.
  • Contribute to technical decision‑making as the company scales its platform and customer base.
  • Act as a thought partner to engineers and operators, not just a service function.

About You

  • Deep, hands‑on experience building in Palantir Foundry, including ontology modelling, pipeline development, API integration, and large‑scale data platform design.
  • Strong proficiency in Python, with experience applying data engineering libraries and frameworks (e.g. Spark, PySpark, Dask, pandas) to work with large, complex datasets.
  • Familiarity with API‑driven data integration, including REST, GraphQL, and Foundry Action APIs.
  • Practical experience working in Git‑based development workflows, including code reviews, version control, and CI/CD pipelines.
  • Comfort working in ambiguous, early‑stage environments where requirements evolve quickly.
  • Strong communication skills — able to…