
AI DevOps and Cloud Infrastructure Engineer

Job in Fort Lauderdale, Broward County, Florida, 33336, USA
Listing for: Crowe
Full Time position
Listed on 2026-01-10
Job specializations:
  • IT/Tech
    Systems Engineer, AI Engineer, Data Engineer
  • Engineering
    Systems Engineer, AI Engineer, Data Engineer
Job Description

AI DevOps and Cloud Infrastructure Engineer


About the Role

The AI DevOps and Cloud Infrastructure Engineer I (Senior Staff) designs, builds, and operates scalable, secure, and highly automated cloud environments that support the training, deployment, monitoring, and continuous delivery of AI and machine learning systems. The engineer collaborates closely with AI engineering, MLOps, data engineering, platform, and security teams to define infrastructure requirements, improve observability, and support the performance demands of predictive and generative AI workloads.

Responsibilities
  • Architecting and maintaining cloud infrastructure for AI model training, inference services, and distributed compute workloads.
  • Implementing infrastructure-as-code (IaC) to automate provisioning, configuration, scaling, and lifecycle management of cloud resources.
  • Designing and operating CI/CD pipelines for automated model training, testing, and deployment of AI-enabled applications.
  • Optimizing Kubernetes clusters, GPU utilization, and compute scaling strategies to balance performance, reliability, and cost.
  • Integrating AI models, inference endpoints, and data pipelines into cloud-native platforms.
  • Developing monitoring, logging, alerting, and observability solutions using modern telemetry and tracing tools (a brief sketch follows this list).
  • Troubleshooting issues across networking, containers, compute, storage, and model-serving layers.
  • Leading performance benchmarking, load testing, and reliability validation for AI systems.
  • Documenting infrastructure architectures, operational runbooks, and engineering standards.
  • Supporting automation for dataset ingestion, model versioning, artifact management, and ML testing.
  • Ensuring compliance with cloud security, identity management, encryption, and responsible AI guidelines.
  • Partnering with security teams to implement secure networking, IAM policies, and secrets management.
  • Providing technical mentorship, design reviews, and cloud best-practice guidance to junior engineers.
  • Evaluating new cloud services, platform capabilities, and AI infrastructure tooling for adoption.
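
As a rough illustration of the observability and GPU-utilization responsibilities above, the sketch below queries a Prometheus server for GPU utilization and inference latency. It is a minimal example, not part of the role description: the server URL, the DCGM-style GPU metric, and the inference_request_duration_seconds histogram are all assumed names used for illustration.

    # Minimal sketch: query Prometheus for GPU utilization and inference latency.
    # The URL and metric names are illustrative assumptions, not details from this posting.
    import requests

    PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

    def instant_query(promql: str) -> list:
        """Run an instant PromQL query and return the result vector."""
        resp = requests.get(
            f"{PROMETHEUS_URL}/api/v1/query",
            params={"query": promql},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["data"]["result"]

    if __name__ == "__main__":
        # Average GPU utilization per node (assumes NVIDIA DCGM exporter metrics).
        for series in instant_query("avg by (node) (DCGM_FI_DEV_GPU_UTIL)"):
            print("GPU util:", series["metric"].get("node", "unknown"), series["value"][1])
        # p95 latency over the last 5 minutes (assumes a hypothetical request-duration histogram).
        for series in instant_query(
            "histogram_quantile(0.95, sum by (le) (rate(inference_request_duration_seconds_bucket[5m])))"
        ):
            print("p95 inference latency (s):", series["value"][1])

In practice, queries like these typically back Grafana dashboards and alerting rules rather than ad hoc scripts; the script only shows the kind of signal this role is expected to expose.
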
Qualifications
  • 4+ years of experience in DevOps, cloud engineering, platform engineering, or infrastructure engineering.
  • Strong proficiency with Kubernetes, Docker, and cloud orchestration platforms.
  • Deep experience with CI/CD systems and deployment automation.
  • Demonstrated ability to debug distributed systems and cloud networking issues.
  • Proficiency in Python, Bash, or other automation/scripting languages.
  • Strong communication skills and ability to collaborate across engineering and security teams.
  • Willingness to travel occasionally for cross-functional planning and collaboration.
Preferred Qualifications
  • Bachelor’s degree in Computer Science, Cloud Engineering, Information Systems, or a related technical field, or equivalent experience.
  • Master’s degree in a technical discipline.
  • Experience enabling ML or AI workloads at scale in production environments.
  • Cloud and platform certifications, including Azure (AZ‑900, AZ‑104, AZ‑305, AZ‑700, AI‑102) or equivalent AWS/GCP certifications.
  • Advanced experience with AWS (e.g., EKS, EC2, IAM, Lambda, SageMaker) and/or Azure (e.g., AKS, VMSS, Azure ML).
  • Experience with GPU orchestration and scaling strategies for AI workloads.
  • Expertise with Terraform or other infrastructure-as-code frameworks.
  • Hands‑on experience with observability stacks such as Prometheus, Grafana, CloudWatch, and OpenTelemetry.
  • Experience deploying and operating generative AI workloads, including LLM inference autoscaling and RAG architectures.
  • Familiarity with vector database hosting (e.g., Pinecone, Weaviate, FAISS) and model‑serving frameworks (e.g., Hugging Face TGI, vLLM, custom inference containers).
  • Experience building CI/CD pipelines for LLM fine‑tuning workflows (e.g., LoRA, QLoRA, PEFT) and monitoring generative AI performance metrics such as latency, throughput, and hallucination rates (a benchmarking sketch follows this list).
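
To make the latency and throughput metrics in the last item concrete, here is a minimal benchmarking sketch. It assumes an OpenAI-compatible completions endpoint, such as one served by vLLM, at a local URL; the endpoint, model name, and prompt are illustrative assumptions rather than details from this posting.

    # Minimal sketch: measure latency and rough token throughput against an
    # OpenAI-compatible LLM inference endpoint (e.g., served by vLLM).
    # The URL, model name, and prompt are hypothetical placeholders.
    import time
    import requests

    ENDPOINT = "http://localhost:8000/v1/completions"  # hypothetical vLLM server
    MODEL = "example-org/example-model"                # hypothetical model name

    def time_one_request(prompt: str, max_tokens: int = 128) -> tuple[float, int]:
        """Send one completion request; return (seconds elapsed, completion tokens)."""
        start = time.perf_counter()
        resp = requests.post(
            ENDPOINT,
            json={"model": MODEL, "prompt": prompt, "max_tokens": max_tokens},
            timeout=120,
        )
        resp.raise_for_status()
        elapsed = time.perf_counter() - start
        tokens = resp.json().get("usage", {}).get("completion_tokens", 0)
        return elapsed, tokens

    if __name__ == "__main__":
        latencies, total_tokens = [], 0
        for _ in range(10):
            elapsed, tokens = time_one_request("Summarize the role of IaC in MLOps.")
            latencies.append(elapsed)
            total_tokens += tokens
        latencies.sort()
        print(f"median latency: {latencies[len(latencies) // 2]:.2f} s")
        print(f"worst latency:  {latencies[-1]:.2f} s")
        print(f"throughput:     {total_tokens / sum(latencies):.1f} tokens/s")

A real load test would issue such requests concurrently and track tail latencies under autoscaling; this sequential version only illustrates the metrics being tracked.
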
Our Values

We expect the candidate to uphold Crowe’s values of Care, Trust, Courage, and Stewardship. These values define who we are and guide ethical and integrity‑focused behavior at…
