Prem AI Infra Engineer | Kubernetes & GitOps
Our client is a young high-tech company incorporated in the heart of one of the world's fastest-growing tech hubs: Dubai, UAE.
As the exclusive software partner to one of the world's largest ODMs in the networking equipment space, they develop the Network Operating Systems that power critical data centre and telecom routing & switching infrastructure. Building on this foundation, they have recently launched an AI division focused on designing their own chips to accelerate inference and training workloads.
What sets them apart is their unique position at the centre of a historic development: their ODM partner is establishing the first networking equipment factory of its kind in the GCC region, and they are the software engine driving this groundbreaking initiative.
They are not just building technology; they are building a true networking vendor that serves regional interests while meeting the growing demand for networking equipment across the MENA region and beyond.
Their long-term vision extends beyond products to people: creating a thriving ecosystem for embedded systems and ASIC design talent that will produce generations of world-class professionals, establishing the region as a global centre of excellence for Enterprise Compute innovation.
As a rapidly growing company at the forefront of AI hardware innovation, they are constantly seeking talented and motivated individuals to join their team. They offer a dynamic and challenging work environment, with opportunities to make a significant impact on the future of AI technology.
Your Mission
Own the end-to-end design and operation of their on-premise infrastructure for AI and enterprise workloads: built as code, automated, observable, and secure. You will architect and run Kubernetes clusters for training/inference, manage servers, networks, and core services, and enable developers with reliable CI/CD and platform tooling. This is where minutes, time-to-recovery, and cost-per-job directly impact AI velocity at scale.
Responsibilities
Design and operate on-prem infrastructure as code: author reusable Terraform/Ansible/Helm modules; build GitOps workflows (e.g., Argo CD) for repeatable, audited changes across environments.
Build and run Kubernetes for AI: configure multi-tenant GPU clusters (MIG/GPUDirect RDMA,
NVIDIA device plugins/DCGM), scheduling/quotas, HPA/Cluster Autoscaler (where applicable),
and workload isolation.
Administer servers, networks, and core services: OS lifecycle (Linux), identity/SSO
(Keycloak/LDAP), secrets (Vault), DNS/DHCP/NTP, artifact registries, and internal package
mirrors.
Provide storage for AI pipelines: integrate and operate high-bandwidth/low-latency storage,
tune for dataset staging and checkpointing patterns.
Enable CI/CD: partner with developers to design fast, reproducible pipelines (GitLab CI/GitHub Actions), caching and runners on GPU/CPU nodes, artifact provenance (SBOM, SLSA).
You'll Collaborate With
Platform and ML engineers running training/inference at scale, silicon and systems teams integrating hardware in the lab, security engineers safeguarding credentials and supply chain, application developers delivering services via CI/CD, and site ops supporting data centre deployments. Together, you will turn infrastructure into a product that accelerates the business.
Minimum Qualifications
5+ years in DevOps/SRE/Platform Engineering with hands-on ownership of on-prem
environments.
Proven experience operating Kubernetes in production (multi-tenant RBAC, networking/CNI,
storage, ingress, monitoring).
Proficiency with IaC and automation (Terraform, Ansible, Helm; GitOps with Argo CD/Flux).
Strong Linux administration, scripting (Bash/Python), and troubleshooting across the stack
(compute, network, storage).
CI/CD expertise (GitLab CI/GitHub Actions), container build security (SBOM, image signing), and
artifact management.
Solid networking fundamentals (L2/L3, routing, BGP, VLANs, EVPN/VXLAN, load balancing,
TLS/mTLS).
Experience implementing observability (Prometheus/Grafana, logs, tracing) and running incident
response.
Preferred (Nice-to-Haves)
GPU cluster operations for AI (NVIDIA drivers/operator, DCGM, MIG, GPUDirect RDMA, Slurm
integration).
Storage for…