The Role
The AI Infrastructure Engineer (L3) provides advanced engineering and architectural expertise for high‑performance AI and ML infrastructure. This role focuses on building, optimizing, and scaling GPU/accelerator environments and distributed systems for large‑scale training and inference workloads.
Competency Focus: High‑performance computing (HPC), distributed systems, Kubernetes, GPU orchestration, cloud optimization
Key Responsibilities:
Deploy, configure, and manage GPU and AI accelerator platforms (NVIDIA A100/H100/L40, AMD Instinct, TPU).
Troubleshoot GPU hardware and software issues, including failures, thermal throttling, PCIe/NVLink topology, and driver conflicts.
Install, upgrade, and maintain GPU software stacks, including drivers, CUDA, cuDNN, TensorRT, and firmware.
Perform capacity planning and resource optimization for AI training, fine‑tuning, and inference workloads.
Optimize Linux systems (Ubuntu, RHEL, Rocky) for AI/HPC workloads through NUMA, kernel, and clock tuning.
Manage distributed and high‑performance storage systems, including BeeGFS, Lustre, Ceph, and high‑throughput NFS.
Operate high‑bandwidth, low‑latency networks, including InfiniBand, RoCE, RDMA, and NVLink.
Administer Kubernetes GPU clusters, leveraging NVIDIA GPU Operator, device plugins, MIG, and node feature discovery.
Support AI and HPC orchestration platforms, including Kubeflow, Ray, MLflow, and Slurm/PBS.
Configure and manage GPU scheduling and sharing strategies, such as node pools, quotas, job queues, and fair‑share policies.
Optimize distributed training workflows using NCCL, PyTorch Distributed, Horovod, and DeepSpeed.
Operate and tune LLM and inference runtimes, including vLLM, Triton Inference Server, and TensorRT‑LLM.
Monitor and tune GPU utilization, memory allocation, and container-level performance.
Automate cluster provisioning and operations using Terraform, Helm, Kustomize, and GitOps (ArgoCD/Flux).
Build automation for GPU diagnostics, node onboarding, and model deployment workflows.
Implement observability and telemetry using Prometheus, Grafana, NVIDIA DCGM, and OpenTelemetry.
Lead deep‑dive root cause analysis for GPU, network, storage, and orchestration issues.
Provide L3 support and work with L2/L1 teams for escalations.
Drive production readiness, patching, hotfix rollout, and reliability improvements across AI infrastructure.
Troubleshoot and escalate complex platform failures.
Perform deep debugging of NCCL hangs and GPU fabric issues, and coordinate with OEMs and support vendors on critical issues.
Review RCA, architecture documents, and change plans
Act as technical advisor to leadership and customers
Qualifications & Experience
Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field
8–12 years of overall infrastructure or platform engineering experience
4–6 years of specialized experience supporting AI/ML workloads
Demonstrated experience in large‑scale GPU/accelerated computing and distributed systems
Strong experience in Kubernetes, containerization, and orchestration tools
Understanding of AI workload and MLOps
Certifications Required
NVIDIA Certified Associate – AI Infrastructure
NVIDIA NPN Certification
NVIDIA Base Command Manager certification
AWS Solutions Architect Associate
CKA – Certified Kubernetes Administrator
CKAD – Certified Kubernetes Application Developer