
Software Engineer - AI Workbench

Job in London, Greater London, W1B, England, UK
Listing for: PhysicsX Ltd
Full Time position
Listed on 2026-02-28
Job specializations:
  • Software Development
    Data Engineer, Software Engineer
Salary/Wage Range or Industry Benchmark: £100,000 – £125,000 per year
Job Description & How to Apply Below

PhysicsX is a deep-tech company with roots in numerical physics and Formula One, dedicated to accelerating hardware innovation at the speed of software.

We are building an AI-driven simulation software stack for engineering and manufacturing across advanced industries. By enabling high-fidelity, multi-physics simulation through AI inference across the entire engineering lifecycle, PhysicsX unlocks new levels of optimization and automation in design, manufacturing, and operations — empowering engineers to push the boundaries of possibility. Our customers include leading innovators in Aerospace & Defense, Materials, Energy, Semiconductors, and Automotive.

The Role

PhysicsX is developing a platform used by Data Scientists and Simulation Engineers to build, train, and deploy Deep Physics Models. The core of this platform relies on handling massive volumes of complex simulation data, enabling high-fidelity multi-physics simulation through AI inference.

We are looking for a Software Engineer with a strong background in building data platforms to join our team. You will not just be moving data from A to B; you will be architecting and building the distributed systems, services, and APIs that form the backbone of our data strategy. You will bridge the gap between complex physical simulations and modern data infrastructure, implementing storage solutions for AI/ML pipelines and creating the analytical layers that allow our engineers to visualise and understand their results.

You will also play a key role in shaping technical direction — contributing to Technical Decision Records, collaborating with experienced engineers, and helping to drive the standards that keep our platform reliable, secure, and performant. This is a role for a builder who loves coding robust software as much as they love designing efficient data architectures.

What you will do
  • Contribute to the design and building of scalable microservices, APIs, and data pipelines for high-dimensional simulation data across the ML lifecycle, working within established architectural patterns.
  • Build and maintain automated data ingestion and processing pipelines that power active learning loops, serving both no-code and pro-code users.
  • Implement and integrate data infrastructure components (Data Warehouses, Data Lakes, storage solutions) for simulation and deep learning workloads.
  • Build internal tools that enable BI dashboards and scientific data visualizations, making large datasets intuitive and accessible.
  • Own features end-to-end — from implementation through testing, deployment, and maintenance — writing clean, well-tested, secure code.
  • Contribute to reliability standards, performance monitoring, and quality of service metrics. Identify and help resolve performance bottlenecks.
  • Follow and contribute to API schema standards, security practices, and data access control patterns.
  • Participate in CI/CD pipeline maintenance, automated testing, and observability practices, including supporting zero-downtime deployments.
  • Participate in code reviews, knowledge sharing, and cross-functional collaboration with data scientists and researchers.
  • Contribute to Technical Decision Records and team discussions on tooling and architectural trade-offs.
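To make the data-ingestion responsibilities above concrete, here is a minimal, purely illustrative sketch of a validated ingestion step over simulation output. All names, fields, and the record schema are hypothetical examples, not part of PhysicsX's actual platform:

```python
import json
from dataclasses import dataclass
from typing import Iterator

# Hypothetical record type for illustration only; the real platform's
# schema for simulation data is not specified in this listing.
@dataclass
class SimulationRecord:
    run_id: str
    mesh_cells: int
    max_stress_pa: float

def ingest(lines: Iterator[str]) -> Iterator[SimulationRecord]:
    """Validate and normalise raw JSON-lines simulation output."""
    for line in lines:
        raw = json.loads(line)
        # Skip records failing a basic schema check rather than
        # crashing the pipeline on one bad run.
        if raw.get("mesh_cells", 0) <= 0:
            continue
        yield SimulationRecord(
            run_id=str(raw["run_id"]),
            mesh_cells=int(raw["mesh_cells"]),
            max_stress_pa=float(raw["max_stress_pa"]),
        )

raw_lines = [
    '{"run_id": "r1", "mesh_cells": 120000, "max_stress_pa": 3.2e8}',
    '{"run_id": "r2", "mesh_cells": 0, "max_stress_pa": 1.0e8}',
]
records = list(ingest(raw_lines))
print(len(records))  # the invalid record is dropped
```

In a production pipeline of the kind described, a step like this would sit between raw storage and the warehouse or lake layer, with invalid records routed to a dead-letter store rather than silently skipped.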
What you bring to the table
  • A passion for the craft — you're driven by engineering excellence and committed to fostering that culture across the team.
  • Strong software engineering foundations — a solid grasp of algorithms, data structures, and system design. You write clean, maintainable, testable code in Python, with working knowledge of Go or Rust.
  • Data platform exposure — experience building or contributing to production data processing systems. Familiarity with tools such as Databricks, Snowflake, or BigQuery, and with Data Warehouse and Data Lake concepts.
  • API and service design — experience with multi-service architectures, including schema design and data access patterns.
  • Security and reliability awareness — an understanding of security fundamentals, monitoring, alerting, and quality of service in production systems.
  • CI/CD familiarity — practical experience with CI/CD pipelines and deployment workflows.
  • Safe code execution…