SC Cleared HPC Containerization Expert
Derby, Derbyshire, DE1, England, UK
Listed on 2026-02-03
Listing for: Click Digital
Full Time position
Job specializations:
* IT/Tech: Systems Engineer, Cloud Computing
Job Description
Role Overview
This role provides high-level consultancy and first-hand engineering expertise to modernise High-Performance Computing (HPC) environments, with a focus on containerisation of HPC and HPC/ML workloads, cluster optimisation, and MPI-aware workflows.
Responsibilities include assessing the existing HPC infrastructure, advising on best-practice container strategies, and delivering technical recommendations based on real workloads.
Key Responsibilities
The ideal candidate will collaborate with the customer, through workshops and interviews, to assess existing HPC infrastructure and workloads across the following areas:
1. HPC Environment Assessment & Optimisation
* Conduct rapid reviews of existing HPC infrastructure, including Beowulf clusters, Slurm job schedulers, workflow execution patterns, and key use cases.
* Analyse current HPC/ML workloads and identify performance constraints, containerisation opportunities, and runtime optimisation options.
2. HPC Containerisation Strategy
* Produce a strategy for MPI-optimised container images using tools such as Apptainer/Singularity (SIF), OpenMPI, Podman, and Buildah.
* Outline, as part of the report's findings, repeatable and secure build pipelines for HPC workloads.
* Identify opportunities to align container platforms with HPC orchestration requirements (batch, interactive, and scientific workflows).
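To ground the kind of artefact such a strategy would standardise, here is a minimal, hypothetical Apptainer definition file for an MPI-capable SIF image. The base image, package versions, and program name are illustrative assumptions, not details from this role.

```shell
# mpi-app.def — a minimal sketch of an MPI-aware Apptainer definition.
# Base image, MPI packages, and the bundled program are assumptions.
Bootstrap: docker
From: ubuntu:22.04

%files
    # Trivial MPI source file, assumed to sit next to this .def file
    hello_mpi.c /opt/src/hello_mpi.c

%post
    apt-get update
    apt-get install -y --no-install-recommends \
        build-essential openmpi-bin libopenmpi-dev
    # Compile the workload into the image so runtime needs no toolchain
    mpicc -O2 -o /opt/hello_mpi /opt/src/hello_mpi.c

%runscript
    exec /opt/hello_mpi "$@"
```

The image would then be built with `apptainer build mpi-app.sif mpi-app.def`; a repeatable build pipeline would run this step in CI and sign/verify the resulting SIF.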
3. Technical Workshops & Stakeholder Engagement
* Lead technical discussions, workshops, and training sessions to upskill engineering and research teams on containerisation and HPC best practices.
* Produce concise written reports covering recommendations and findings, use cases, future strategy, an HPC market view, next steps, and architectural patterns.
4. HPC Tools, Runtimes & Ecosystem Integration
* Have a good understanding of Slurm scheduling, MPI job patterns, JupyterHub/JupyterLab for data-science workflows, GPU scheduling tools such as Run:AI, and SDLC practices tailored to research/HPC environments.
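Slurm scheduling, MPI job patterns, and containers typically meet in a batch script like the hedged sketch below, using the common "host MPI launches, container executes" pattern. Node counts, time limit, and the image/binary names are assumptions for illustration only.

```shell
#!/bin/bash
# Hypothetical sbatch script: srun performs the MPI process launch on the
# host side, while each rank runs inside the Apptainer image. All resource
# values and file names here are illustrative assumptions.
#SBATCH --job-name=mpi-container-demo
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:15:00

# The host MPI stack must be ABI-compatible with the MPI inside the image
# for this launch pattern to work correctly.
module load openmpi

srun apptainer exec mpi-app.sif /opt/hello_mpi
```

Submitted with `sbatch job.sh`, this runs one container instance per MPI rank; the host's `srun`/PMI layer wires the ranks together across nodes.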
Essential Skills & Experience
* Proven background in high-performance computing, including hands-on use of Beowulf-style cluster environments.
* Strong practical knowledge of Slurm job scheduling and MPI job optimization.
* Demonstrated experience with MPI-aware container technologies (Apptainer/SIF, OpenMPI, Podman, Buildah).
* Ability to run structured interviews, technical workshops, and cross-team engineering sessions.
* Experience supporting typical HPC, ML and scientific workloads.
Desirable Experience
* Knowledge of CFD software (e.g., OpenFOAM).
* Familiarity with Kubernetes or Singularity as an HPC runtime, GPU slicing, and image lifecycle governance.
* Experience contributing written outputs such as market assessments of HPC container technologies.