
GPU Cloud Platform Engineer

Job in Mount Pearl, St. John's, Newfoundland and Labrador (NL), Canada
Listing for: Yotta Labs
Full Time position
Listed on 2026-03-03
Job specializations:
  • IT/Tech
    Systems Engineer, AI Engineer
Job Description & How to Apply Below

About Yotta Labs

Yotta Labs is pioneering the development of a Decentralized Operating System (DeOS) for AI workload orchestration at a planetary scale. Our mission is to democratize access to AI resources by aggregating geo-distributed GPUs, enabling high-performance computing for AI training and inference on a wide spectrum of hardware—from commodity to high-end GPUs. Our platform supports major large language models (LLMs) and offers customizable solutions for new models, facilitating elastic and efficient AI development.

Role Overview

We are seeking a GPU Cloud Platform Engineer to join our core infrastructure team and help build the next-generation AI compute cloud. In this role, you will design, deploy, and operate large-scale, multi-cluster GPU infrastructure across data centers and cloud environments. You will be responsible for ensuring high availability, performance, and efficiency of containerized AI workloads—ranging from LLMs to generative models—deployed in Kubernetes-based GPU clusters.

Responsibilities
  • Build and operate large-scale, high-performance GPU clusters; ensure stable operation of compute, network, and storage systems; monitor and troubleshoot online issues.
  • Conduct performance testing and evaluation of multi-node GPU clusters using standard benchmarking tools to identify and resolve performance bottlenecks.
  • Deploy and orchestrate large models (e.g., LLMs, video generation models) across multi-cluster environments using Kubernetes; implement elastic scaling and cross-cluster load balancing to ensure efficient service response under high concurrency for global users.
  • Participate in the design, development, and iteration of GPU cluster scheduling and optimization systems. Define and lead Kubernetes multi-cluster configuration standards; optimize scheduling strategies (e.g., node affinity, taints/tolerations) to improve GPU resource utilization.
  • Build a unified multi-cluster management and monitoring system to support cross-region resource monitoring, traffic scheduling, and fault failover. Collect key metrics such as GPU memory usage, QPS, and response latency in real time; configure alert mechanisms.
  • Coordinate with IDC providers for planning and deploying large-scale GPU clusters, networks, and storage infrastructure to support internal cloud platforms and external customer needs.
Qualifications
  • Bachelor's degree or higher in Computer Science, Software Engineering, Electronic Engineering, or related fields; 3+ years of experience in system engineering or DevOps.
  • 5+ years of experience in cloud-native development or AI engineering, with at least 2 years of hands-on experience in Kubernetes multi-cluster management and orchestration.
  • Familiarity with the Kubernetes ecosystem; hands-on experience with tools such as kubectl, Helm, and expertise in multi-cluster deployment, upgrade, scaling, and disaster recovery.
  • Proficient in Docker and containerization technologies; knowledge of image management and cross-cluster distribution.
  • Experience with monitoring tools such as Prometheus and Grafana; has practical experience in GPU fault monitoring and alerting.
  • Hands-on experience with cloud platforms such as AWS, GCP, or Azure; understanding of cloud-native multi-cluster architecture.
  • Experience with cluster management tools such as Ray, Slurm, KubeSphere, Rancher, or Karmada is a plus.
  • Familiarity with distributed file systems such as NFS, JuiceFS, CephFS, or Lustre; ability to diagnose and resolve performance bottlenecks.
  • Understanding of high-performance communication protocols and interconnects such as InfiniBand (IB), RoCE, NVLink, and PCIe.
  • Strong communication skills, self-motivation, and team collaboration.
Preferred Experience
  • Experience in developing and operating MaaS platforms or large-scale model inference clusters. Proven track record of leading multi-cluster system development or performance optimization projects.
  • Proficiency in CUDA programming and the NCCL communication library; understanding of high-performance GPUs like H100.
  • Ability to develop standardized inference APIs (RESTful/gRPC) and automation tools using Golang or Python.
  • Hands-on…