
AI DevOps and Cloud Infrastructure Engineer

Job in Cleveland, Cuyahoga County, Ohio, 44101, USA
Listing for: Crowe
Full Time position
Listed on 2026-01-13
Job specializations:
  • IT/Tech
    Systems Engineer, Cloud Computing, AI Engineer, Data Engineer

Job Description:

The AI DevOps and Cloud Infrastructure Engineer designs, builds, and operates scalable, secure, and highly automated cloud environments that support the training, deployment, monitoring, and continuous delivery of AI and machine learning systems. This role serves as a subject-matter expert in infrastructure automation, distributed compute orchestration, and cloud platform operations, ensuring AI workloads perform reliably across development, staging, and production environments.

About Crowe AI Transformation

Everything we do is about making the future of human work more purposeful. We leverage state-of-the-art technologies, modern architecture, and industry experts to create AI-powered solutions that transform how our clients do business. The AI Transformation team builds on Crowe’s AI foundation, combining Generative AI, Machine Learning, and Software Engineering to empower Crowe clients to transform their business models through AI.

Responsibilities
  • Architecting and maintaining cloud infrastructure for AI model training, inference services, and distributed compute workloads.
  • Implementing infrastructure-as-code (IaC) to automate provisioning, configuration, scaling, and lifecycle management of cloud resources.
  • Designing and operating CI/CD pipelines for automated model training, testing, and deployment of AI-enabled applications.
  • Optimizing Kubernetes clusters, GPU utilization, and compute scaling strategies to balance performance, reliability, and cost (a brief scaling sketch follows this list).
  • Integrating AI models, inference endpoints, and data pipelines into cloud-native platforms.
  • Developing monitoring, logging, alerting, and observability solutions using modern telemetry and tracing tools.
  • Troubleshooting issues across networking, containers, compute, storage, and model-serving layers.
  • Leading performance benchmarking, load testing, and reliability validation for AI systems.
  • Documenting infrastructure architectures, operational runbooks, and engineering standards.
  • Supporting automation for dataset ingestion, model versioning, artifact management, and ML testing.
  • Ensuring compliance with cloud security, identity management, encryption, and responsible AI guidelines.
  • Partnering with security teams to implement secure networking, IAM policies, and secrets management.
  • Providing technical mentorship, design reviews, and cloud best-practice guidance to junior engineers.
  • Evaluating new cloud services, platform capabilities, and AI infrastructure tooling for adoption.
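
To make the Kubernetes scaling responsibility above more concrete, here is a minimal sketch using the official kubernetes Python client to patch the replica count of a model-serving deployment. The deployment name, namespace, and replica count are hypothetical placeholders, and a reachable cluster with valid kubeconfig credentials is assumed.

# A minimal sketch, assuming the official `kubernetes` Python client is installed
# and a kubeconfig for the target cluster is available. The deployment and
# namespace names are hypothetical placeholders, not details from this posting.
from kubernetes import client, config

def scale_inference_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the replica count of a model-serving deployment."""
    config.load_kube_config()          # inside a pod, use config.load_incluster_config()
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Example: scale a GPU-backed inference service ahead of a load test.
    scale_inference_deployment("llm-inference", "ml-serving", replicas=4)

In practice this sort of adjustment would typically sit behind a Horizontal Pod Autoscaler or a pipeline step rather than an ad-hoc script; the sketch only illustrates the API surface involved.
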
Qualifications
  • 4+ years of experience in DevOps, cloud engineering, platform engineering, or infrastructure engineering.
  • Strong proficiency with Kubernetes, Docker, and cloud orchestration platforms.
  • Deep experience with CI/CD systems and deployment automation.
  • Demonstrated ability to debug distributed systems and cloud networking issues.
  • Proficiency in Python, Bash, or other automation/scripting languages (a short scripting sketch follows this list).
  • Strong communication skills and ability to collaborate across engineering and security teams.
  • Willingness to travel occasionally for cross-functional planning and collaboration.
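
As a small illustration of the scripting proficiency listed above, the following sketch uses only the Python standard library to probe a set of service health endpoints and exit non-zero on any failure, the kind of glue automation that often backs CI gates or scheduled checks. The service names and URLs are invented for illustration and are not part of this posting.

# A minimal scripting sketch using only the Python standard library.
# The service names and URLs below are placeholders invented for illustration.
import json
import urllib.request

ENDPOINTS = {
    "inference-gateway": "http://localhost:8080/healthz",
    "feature-pipeline": "http://localhost:8081/healthz",
}

def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:                      # covers URLError, timeouts, refused connections
        return False

if __name__ == "__main__":
    results = {name: is_healthy(url) for name, url in ENDPOINTS.items()}
    print(json.dumps(results, indent=2))
    if not all(results.values()):
        raise SystemExit(1)              # non-zero exit so a CI gate or cron alert can fire
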
Preferred Qualifications
  • Bachelor’s degree in Computer Science, Cloud Engineering, Information Systems, or a related technical field, or equivalent experience.
  • Master’s degree in a technical discipline.
  • Experience enabling ML or AI workloads at scale in production environments.
  • Cloud and platform certifications, including Azure (AZ-900, AZ-104, AZ-305, AZ-700, AI-102) or equivalent AWS/GCP certifications.
  • Advanced experience with AWS (EKS, EC2, IAM, Lambda, SageMaker) and/or Azure (AKS, VMSS, Azure ML).
  • Experience with GPU orchestration and scaling strategies for AI workloads.
  • Expertise with Terraform or other infrastructure-as-code frameworks.
  • Hands-on experience with observability stacks such as Prometheus, Grafana, CloudWatch, and OpenTelemetry (an instrumentation sketch follows this list).
  • Experience deploying and operating generative AI workloads, including LLM inference autoscaling and RAG architectures.
  • Familiarity with vector database hosting (e.g., Pinecone, Weaviate, FAISS) and model-serving frameworks (e.g., Hugging…
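
For the observability tooling named above, here is a minimal instrumentation sketch assuming the prometheus_client Python library: it records request counts and latency for a stand-in inference function and exposes them on a local /metrics endpoint for Prometheus to scrape. Metric names, labels, and the port are illustrative assumptions, not details from the posting.

# A minimal sketch assuming the `prometheus_client` library is installed; metric
# names, labels, and the port are illustrative and not taken from this posting.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Total inference requests", ["model"])
LATENCY = Histogram("inference_latency_seconds", "Inference latency in seconds", ["model"])

def serve_request(model: str) -> None:
    """Stand-in for a model call, instrumented with count and latency metrics."""
    with LATENCY.labels(model=model).time():
        time.sleep(random.uniform(0.05, 0.2))   # placeholder for real inference work
    REQUESTS.labels(model=model).inc()

if __name__ == "__main__":
    start_http_server(9100)    # exposes /metrics for Prometheus to scrape
    while True:
        serve_request("demo-model")

Grafana dashboards and alert rules would then be built on top of the scraped series.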