
ML Ops Developer III

Job in Austin, Travis County, Texas, 78716, USA
Listing for: Netspend Corporation
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    AI Engineer, Cloud Computing, Machine Learning / ML Engineer
Salary/Wage Range: 60,000 - 80,000 USD yearly
Job Description & How to Apply Below

About the Company:

Ouro is a global, vertically-integrated financial services and technology company dedicated to the delivery of innovative financial empowerment solutions to consumers worldwide. Ouro’s financial products and services span prepaid, debit, cross-border payments, and loyalty solutions for consumers and enterprise partners.

Ouro's flagship product Netspend provides prepaid and debit account solutions that connect customers with secure, convenient access to global payment networks so they can manage their money and make everyday purchases. With a nationwide U.S. retail network, customers can purchase and reload Netspend products at 130,000 reload points and over 100,000 distributing locations.

Since Ouro's founding in 1999 by industry pioneers Roy and Bertrand Sosa, Ouro products have processed billions of dollars in transaction volume and served millions of customers worldwide. The company is headquartered in Austin, Texas, with regional offices around the world.

Role Summary

The MLOps Developer will join a centralized MLOps Engineering team responsible for productionizing machine learning and generative AI workloads at enterprise scale. The role will drive the design, automation, deployment, observability, and governance of ML and LLM platforms using AWS SageMaker and Amazon Bedrock.

This position requires close collaboration with Data Science (DS) teams to support model development, training, validation, and deployment into production. You will also be responsible for evolving and optimizing ML workflows, continuously improving automation, reliability, and security to meet emerging business and platform requirements.

Key Responsibilities
  • Architect, deploy, and operate development and production MLOps platforms on AWS (SageMaker, Bedrock)

  • Build and maintain CI/CD pipelines for ML model training and deployment

  • Implement Infrastructure as Code (IaC) using Terraform

  • Manage AWS cloud components, including IAM, VPC, EKS, Lambda, security, networking, monitoring, and compliance

  • Automate the end-to-end ML model lifecycle (training, deployment, endpoints, monitoring, and failure detection)

  • Configure and manage cloud observability (logging, alerts, dashboards via CloudWatch and monitoring tools)

  • Enable secure LLM onboarding, prompt orchestration, and governance using Amazon Bedrock

  • Ensure platform reliability, scalability, security, and regulatory compliance

  • Partner with DS and Engineering to support ML model productionization and release governance
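
To illustrate the monitoring and failure-detection work described above, here is a minimal, hypothetical sketch in plain Python (no AWS dependencies; the metric names and thresholds are invented for illustration, not taken from any actual Netspend/Ouro system):

```python
from dataclasses import dataclass

@dataclass
class EndpointMetrics:
    """Point-in-time health metrics for a deployed model endpoint."""
    error_rate: float      # fraction of failed invocations
    p99_latency_ms: float  # 99th-percentile response time
    drift_score: float     # distance between live and training feature distributions

def detect_failures(m: EndpointMetrics,
                    max_error_rate: float = 0.02,
                    max_latency_ms: float = 500.0,
                    max_drift: float = 0.3) -> list[str]:
    """Return a list of alert reasons; an empty list means the endpoint looks healthy."""
    alerts = []
    if m.error_rate > max_error_rate:
        alerts.append(f"error rate {m.error_rate:.1%} exceeds {max_error_rate:.1%}")
    if m.p99_latency_ms > max_latency_ms:
        alerts.append(f"p99 latency {m.p99_latency_ms:.0f} ms exceeds {max_latency_ms:.0f} ms")
    if m.drift_score > max_drift:
        alerts.append(f"drift score {m.drift_score:.2f} exceeds {max_drift:.2f}")
    return alerts

# A healthy snapshot raises no alerts; a drifting one raises a single drift alert.
healthy = EndpointMetrics(error_rate=0.005, p99_latency_ms=120.0, drift_score=0.1)
drifting = EndpointMetrics(error_rate=0.005, p99_latency_ms=120.0, drift_score=0.8)
```

In a production setup the metrics would come from CloudWatch or an equivalent monitoring source, and the alert list would feed a pager or dashboard rather than being returned to the caller.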

Required Skills
  • Advanced proficiency in Python

  • Strong experience in Terraform (IaC) for AWS infrastructure automation

  • Hands-on knowledge of CI/CD, DevOps, and deployment governance

  • Experience with the AWS ML/AI ecosystem:
    SageMaker, Bedrock, IAM, VPC, EKS, Lambda, CloudWatch, cloud security, and monitoring

  • Practical experience with ML model deployment, endpoints, and production support

  • Solid understanding of cloud security, networking, logging, and observability

  • Knowledge of MLOps best practices and ML system design

Preferred Qualifications
  • 7+ years of experience in AI/ML engineering or platform roles

  • Experience with AWS SageMaker endpoints, pipelines, and model hosting

  • Experience integrating, orchestrating, or governing LLM workloads using Amazon Bedrock

  • Prior experience with ML deployments in production environments

  • Familiarity with Terraform modules and EKS-based deployments

  • Knowledge of ML observability, monitoring, and failure detection

  • Experience in FinTech or enterprise data platforms is an advantage

Why This Role Matters

This is a mission-critical engineering role that ensures ML and LLM workloads are deployed securely, reliably, and efficiently at enterprise scale. The role supports long-term AI platform evolution and accelerates scalable ML and generative AI adoption across the organization.
