
AWS Cloud Operations Engineer

Remote / Online - Candidates ideally in Kentucky, USA
Listing for: Pharmacy Data Management, Inc.
Remote/Work from Home position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Cloud Computing
Job Description & How to Apply Below

PDMI is looking to add an AWS Cloud Operations Engineer to our team! As an AWS Cloud Operations Engineer, you will deploy, manage, and optimize cloud infrastructure and applications in AWS environments. This role ensures seamless, secure, and efficient deployment of infrastructure and application code, working closely with internal IT and development teams to maintain a highly available and scalable cloud ecosystem.

Since 1984, PDMI has provided pharmacy data processing and other flexible, scalable solutions to help our clients meet their business objectives. We offer transparent, pass-through pharmacy processing and other services for private label Pharmacy Benefit Managers (PBMs), vertically integrated health plans and hospital systems. In addition to Pharmacy Benefit Administrative Services, we offer 340B Administration, Hospice and Long-Term Care Services.

Why Join Us:
  • Best Employer: PDMI was voted Best Employer in Ohio for the 5th consecutive year in 2025!
  • Meaningful Work: Contribute to improving healthcare quality and efficiency.
  • Collaborative Environment: Work with passionate professionals who share your drive.
  • Exciting Challenges: Every day brings new opportunities to excel.
  • Flexible Work: Fully remote opportunity with a company that cares.
Responsibilities:
  • Own and manage all production AWS environments across multi-account Control Tower architecture (Shared, Prod, Dev, and specialized workloads)
  • Build, maintain, and troubleshoot CI/CD pipelines using GitHub Actions, AWS CodePipeline, CodeBuild, and CodeDeploy
  • Deploy and operate containerized applications using Amazon EKS (Kubernetes) and Amazon ECS / Fargate
  • Support microservices architecture, service mesh, sidecars, and container security best practices
  • Automate infrastructure provisioning using Terraform, CloudFormation, and AWS-native tools (SSM, Lambda, Step Functions)
  • Manage and optimize core AWS services, including:
    • Compute: EC2, ASGs, Launch Templates
    • Networking: VPC, TGW, Security Groups, Route 53, VPN/Direct Connect, NAT Gateways, ALB/NLB
    • Storage: S3, EBS, EFS
    • Databases: RDS (MySQL, SQL Server), Aurora, DynamoDB
    • Monitoring & logging: CloudWatch, AWS Config, GuardDuty, Inspector; Datadog for monitoring, alerting, dashboards, and APM
  • Implement and enforce security, governance, and best practices across IAM, KMS, SSM Parameter Store, Secrets Manager
  • Support and troubleshoot production workloads, including performance bottlenecks, networking issues, and container failures
  • Work closely with the development team, but maintain ownership of production infrastructure, deployments, and reliability
  • Participate in on-call rotation for production-critical services
  • Utilize Kafka / Confluent Cloud, CI platform integrations, and microservices observability
What We're Looking For:
  • 3+ years of hands-on AWS engineering or DevOps experience (production-grade)
  • Strong experience building and operating CI/CD pipelines using:
    • GitHub Actions (required)
    • AWS CodePipeline / CodeBuild / CodeDeploy
  • Strong experience with EKS (Kubernetes fundamentals, deployments, autoscaling, node groups/Fargate, networking)
  • Experience with ECS/Fargate, container registries, image lifecycle management, and security
  • Hands-on experience with:
    • Terraform (preferred) or CloudFormation
    • EC2, VPC, routing, load balancing, security groups
    • S3, IAM, KMS, Secrets Manager
    • Lambda, Step Functions, EventBridge
    • RDS/Aurora or DynamoDB
  • Understanding of multi-account AWS environments, SSO/Identity, and production governance
  • Ability to diagnose complex AWS issues (networking, IAM, CI/CD, K8s failures)
  • Scripting ability in Python, Bash, or PowerShell (good to have)
  • Strong communication skills and the ability to work closely with developers while owning production reliability
  • Experience with Datadog (Logging, APM, RUM, synthetic tests)
  • Experience with Confluent Kafka or event-streaming architectures
  • Familiarity with modern IaC patterns (modules, GitOps, reusable pipelines)
  • Familiarity with GitHub administration, branch protection, runners, or organization-level DevOps processes
  • Experience in healthcare, compliance-heavy, or regulated environments