
Manager, ML Ops Infrastructure

Job in Dallas, Dallas County, Texas, 75215, USA
Listing for: WTS Paradigm LLC
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    AI Engineer, Machine Learning/ML Engineer, Cloud Computing
Salary/Wage Range or Industry Benchmark: 60,000 – 80,000 USD yearly
Job Description & How to Apply Below

Paradigm is a software company transforming the way that the residential, construction & building product industries operate across the globe. We are looking for a Manager, ML Ops Infrastructure to be part of revolutionizing these industries.

We're looking for a hands‑on technical leader to build and scale the ML Ops infrastructure that powers our AI capabilities in production. You'll oversee the end‑to‑end platform for deploying, serving, and operating ML models and AI agents (what we call our "agent factory"): a repeatable framework that makes shipping AI‑powered features as reliable and routine as deploying any other service.

This role sits at the intersection of ML operations, platform engineering, and cloud infrastructure. You'll build the compute, orchestration, and deployment pipelines that take ML experiments from notebooks to production, creating self‑service tooling so data scientists and ML engineers can deploy with confidence and speed.

What You Will Do:

  • Build and lead a team of ML Ops engineers focused on production deployment frameworks for AI/ML systems, including hiring, mentoring, and providing technical guidance.
  • Design and operate Kubernetes‑based infrastructure for ML workloads including model training, real‑time inference, LLM serving, and agent orchestration.
  • Create the core ML Ops platform: model versioning, deployment automation, registries, serving infrastructure, and CI/CD pipelines purpose‑built for ML and AI agent workflows.
  • Architect and manage GPU‑accelerated compute for training and inference, optimizing for both performance and cost through spot instances, auto‑scaling, and efficient resource allocation.
  • Build self‑service deployment tooling that enables data scientists and ML engineers to push models and agents to production without manual infrastructure work.
  • Build the infrastructure for agentic AI: tool‑calling, multi‑step workflows, orchestration frameworks, multi‑agent systems, and agent lifecycle management.
  • Implement production‑grade deployment strategies (canary, blue/green) with rollback capabilities, observability, drift detection, and performance monitoring.
  • Partner with data science, ML engineering, and SRE teams to align infrastructure with deployment requirements and reliability SLOs.
  • Drive continuous improvement in deployment velocity, cost efficiency, and operational maturity across the ML platform, including evaluating and integrating tools like MLflow, Kubeflow, and emerging agent frameworks.

What You Need to Succeed:

  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent experience.
  • 7+ years in infrastructure engineering, DevOps, or platform engineering, with at least 3 years focused on ML/AI infrastructure.
  • 1+ years of experience building and leading teams that operate production ML systems, or demonstrated tech lead experience with direct influence over team processes and career growth.
  • Track record deploying and managing ML models in production. You understand the full lifecycle from training to serving to monitoring.
  • Hands‑on experience with GPU computing, model optimization, and ML‑specific infrastructure patterns.
  • Hands‑on experience with Kubernetes and container orchestration for ML workloads (Kubeflow, KServe, Ray, or similar).
  • Experience working with Azure cloud services such as Azure ML, Azure OpenAI, Azure Databricks, and GPU‑accelerated compute (GPU VMs, AKS with GPU node pools).
  • Experience using infrastructure as code tools (Terraform or equivalent) with ML infrastructure patterns.
  • Python programming experience with fluency in ML frameworks (PyTorch, TensorFlow) and LLM APIs (OpenAI, Anthropic, Azure OpenAI).
  • Experience with the modern AI/ML toolchain, including model serving (vLLM, Triton, TorchServe), ML Ops platforms (MLflow, Kubeflow, W&B), vector databases (pgvector, Azure AI Search), and agent orchestration frameworks. Familiarity with RAG architectures, fine‑tuning workflows, and embedding pipelines at scale.
  • You are a bridge‑builder who translates fluently between ML practitioners and infrastructure teams.
  • You are a systems thinker who balances performance, cost, and reliability while building for scale.
  • You are collaborative, curious, and driven to enable teams to ship AI capabilities faster than they thought possible.

Ready to Join? Apply now.
#Paradigm
