Machine Learning Engineer II
Remote / Work from Home - Candidates ideally in Indiana, Indiana County, Pennsylvania, 15705, USA
Listing for: Condé Nast
Listed on 2026-01-10
Job specializations:
- IT/Tech: Machine Learning / ML Engineer, AI Engineer
Job Description & How to Apply Below
Condé Nast is a global media company producing the highest quality content with a footprint of more than 1 billion consumers in 32 territories through print, digital, video and social platforms. The company’s portfolio includes many of the world’s most respected and influential media properties including Vogue, Vanity Fair, Glamour, Self, GQ, The New Yorker, Condé Nast Traveler/Traveller, Allure, AD, Bon Appétit and Wired, among others.
Job Description

Location: Bengaluru, KA

About the Role:
Condé Nast is seeking a motivated and skilled Machine Learning Engineer I to support the productionization of machine learning projects in Databricks or AWS environments for the Data Science team.
This role is ideal for an engineer with a strong foundation in software development, data engineering, and machine learning, who enjoys transforming data science prototypes into scalable, reliable production pipelines.
Note: This role focuses on deploying, optimizing, and operating ML models rather than building or researching new machine learning models.
Primary Responsibilities

* Design, build, and operate scalable, highly available ML systems for batch and real-time inference.
* Own the end-to-end production lifecycle of ML services, including deployment, monitoring, incident response, and performance optimization.
* Build and maintain AWS-native ML architectures using services such as EKS, SageMaker, API Gateway, Lambda, and DynamoDB.
* Develop and deploy low-latency ML inference services using FastAPI/Flask or gRPC, running on Kubernetes (EKS); a minimal illustrative sketch follows this list.
* Design autoscaling strategies (HPA/Karpenter), rollout mechanisms, traffic routing, and resource tuning for ML workloads.
* Engineer near-real-time data and inference pipelines processing large volumes of events and requests.
* Collaborate closely with Data Scientists to translate prototypes into robust, production-ready systems.
* Implement and maintain CI/CD pipelines for ML services and workflows using GitHub Actions and Infrastructure-as-Code.
* Improve observability, logging, alerting, and SLA/SLO adherence for critical ML systems.
* Follow agile engineering practices with a strong focus on code quality, testing, and incremental delivery.
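As a minimal sketch of the kind of FastAPI inference service described above (illustrative only; the model path, feature schema, and endpoint names are assumptions, not part of this listing), assuming a scikit-learn model serialized to a hypothetical model.joblib and a flat numeric feature vector as input:

```python
# Illustrative FastAPI inference service sketch (not from the listing).
# Assumes a scikit-learn model saved to "model.joblib" and a hypothetical
# flat feature-vector input schema.
from contextlib import asynccontextmanager
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

MODEL_PATH = "model.joblib"  # hypothetical artifact path
model = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once at startup so per-request latency stays low.
    global model
    model = joblib.load(MODEL_PATH)
    yield


app = FastAPI(lifespan=lifespan)


class PredictRequest(BaseModel):
    features: List[float]  # hypothetical flat feature vector


class PredictResponse(BaseModel):
    prediction: float


@app.get("/healthz")
def healthz():
    # Liveness/readiness probe target for Kubernetes (EKS).
    return {"status": "ok"}


@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Single-row inference; a batch endpoint would accept a list of rows.
    y = model.predict([req.features])[0]
    return PredictResponse(prediction=float(y))
```

Run locally with `uvicorn main:app`; in the architecture the listing describes, such a container would sit behind API Gateway or a Kubernetes Service on EKS, with /healthz wired to liveness/readiness probes and replicas scaled by HPA or Karpenter.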
Desired Skills & Qualifications

* 4-7+ years of experience in Machine Learning Engineering, MLOps, or Backend Engineering.
* Strong foundation in system design, distributed systems, and API-based service architectures.
* Proven experience deploying and operating production-grade ML systems on AWS.
* Strong proficiency in Python, with experience integrating ML frameworks such as PyTorch, TensorFlow, and scikit-learn, and working with data processing libraries like Pandas, NumPy, and PySpark.
* Solid experience with AWS services, including (but not limited to): EC2, S3, API Gateway, Lambda, IAM, VPC Networking, DynamoDB.
* Hands-on experience building and operating containerized microservices using Docker and Kubernetes (preferably EKS).
* Experience building and deploying ML inference services using:
  * FastAPI / Flask / gRPC
  * TorchServe, TensorFlow Serving, Triton, vLLM, or custom inference services
* Strong understanding of data structures, data modeling, and software architecture.
* Experience designing and managing CI/CD pipelines and Infrastructure-as-Code (Terraform) for ML systems.
* Strong debugging, performance optimization, and production troubleshooting skills.
* Excellent communication skills and ability to collaborate effectively across teams.
* Outstanding analytical and problem-solving skills.
* Undergraduate or Postgraduate degree in Computer Science or a related discipline.
Preferred Qualifications

* Experience with workflow orchestration and ML lifecycle tools such as Airflow, Astronomer, MLflow, or Kubeflow.
* Experience working with Databricks, Amazon SageMaker, or Spark-based ML pipelines in production environments.
* Familiarity with ML observability, monitoring, or feature management (e.g., model performance tracking, drift detection, feature stores).
* Experience designing or integrating vector search, embedding-based retrieval, or RAG-style systems in production is a plus.
* Prior…