
ML Ops Engineer

Job in Scottsdale, Maricopa County, Arizona, 85261, USA
Listing for: Prosum
Full Time position
Listed on 2026-02-28
Job specializations:
  • IT/Tech
    Data Engineer, Machine Learning/ML Engineer, Cloud Computing, AI Engineer
Salary/Wage Range: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below

Location:

Scottsdale, AZ (hybrid work schedule)

Term: Full-time/Direct Hire

Job Summary:

We are seeking a skilled ML Ops Engineer with hands-on experience in AWS, Python, SQL, and Spark/PySpark to help deploy, monitor, and maintain machine learning models in production environments. The ideal candidate will have a strong background in putting models into production, automating ML workflows, and collaborating closely with data scientists and engineers to ensure scalable, reliable, and efficient ML systems.

Key Responsibilities:

  • Collaborate with data scientists and ML engineers to deploy machine learning models into production.
  • Design, implement, and maintain robust CI/CD pipelines for model training, validation, and deployment.
  • Monitor and maintain models in production, ensuring optimal performance and reliability.
  • Manage model versioning, rollback strategies, and updates across environments.
  • Build and optimize data processing pipelines using Spark or PySpark.
  • Write production-ready code in Python for ML workflows and automation tasks.
  • Develop and maintain scalable data workflows using SQL and AWS-native data tools.
  • Utilize AWS services (such as S3, SageMaker, Lambda, ECS, EMR, Glue) for model deployment and data engineering tasks.
  • Work closely with cross-functional teams to ensure models meet business and performance requirements.
  • Troubleshoot and resolve issues in ML pipelines and production environments.

Required Qualifications:

  • 2+ years of experience in ML Ops, Data Engineering, or Machine Learning Engineering roles.
  • Proven experience putting models into production and maintaining models post-deployment.
  • Proficient in Python, including ML libraries and scripting.
  • Strong expertise in SQL and working with large datasets.
  • Hands-on experience with Spark or PySpark in distributed data processing environments.
  • Solid understanding and experience with AWS cloud services relevant to ML Ops.
  • Experience with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes, Airflow).
  • Familiarity with monitoring tools and logging for model observability and alerting.
  • Strong problem-solving skills and ability to work independently or as part of a team.

Preferred Qualifications:

  • Experience with Amazon SageMaker or other ML model deployment platforms.
  • Familiarity with ML lifecycle tools (MLflow, TFX, Kubeflow, etc.).
  • Experience working in agile environments and using DevOps best practices.

“This position does not offer sponsorship. Candidates must be legally authorized to work in the United States without sponsorship now or in the future.”
