
Machine Learning Engineer, Oncology Foundation Model

Job in Chicago, Cook County, Illinois, 60290, USA
Listing for: Tempus, Inc.
Full Time position
Listed on 2026-01-12
Job specializations:
  • IT/Tech
    Data Engineer, Machine Learning / ML Engineer, AI Engineer
Job Description
Position: Staff Machine Learning Engineer, Oncology Foundation Model

Locations: Chicago, New York City, Remote (Illinois), Redwood City.
Employment Type: Full-time.

Recent advancements in underlying technology have finally made it possible for AI to impact clinical care in a meaningful way. Tempus' proprietary platform connects an entire ecosystem of real‑world evidence to deliver real‑time, actionable insights to physicians, providing critical information about the right treatments for the right patients, at the right time. We are seeking an experienced and highly skilled Staff Machine Learning Engineer with deep expertise in large‑scale multimodal model systems engineering to join our dynamic AI team.

Your work will directly enable the training and deployment of robust, production‑ready multimodal systems that analyze complex data types (such as genomics, pathology images, radiology scans, and clinical notes) to improve patient care, optimize clinical workflows, and accelerate life‑saving medical research.

Key Responsibilities
  • Architect and build sophisticated data processing workflows responsible for ingesting, processing, and preparing multimodal training data that seamlessly integrate with large-scale distributed ML training frameworks and infrastructure (GPU clusters).
  • Develop strategies for efficient, compliant data ingestion from diverse sources, including internal databases, third‑party APIs, public biomedical datasets, and Tempus's proprietary data ecosystem.
  • Utilize, optimize, and contribute to frameworks specialized for large-scale ML data loading and streaming (e.g., MosaicML Streaming, Ray Data, HF Datasets).
  • Collaborate closely with infrastructure and platform teams to leverage and optimize cloud‑native services (primarily GCP) for performance, cost‑efficiency, and security.
  • Engineer efficient connectors and data loaders for accessing and processing information from diverse knowledge sources, such as knowledge graphs, internal structured databases, biomedical literature repositories (e.g., PubMed), and curated ontologies.
  • Optimize data storage for efficient large‑scale training and knowledge access.
  • Orchestrate, monitor, and troubleshoot complex data workflows using tools such as Airflow or Kubeflow Pipelines.
  • Establish robust monitoring, logging, and alerting systems for data pipeline health, data drift detection, and data quality assurance, providing feedback loops for continuous improvement.
  • Analyze and optimize data I/O performance bottlenecks, considering storage systems, network bandwidth, and compute resources.
  • Actively manage and seek optimizations for the costs associated with storing and processing massive datasets in the cloud.
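For candidates unfamiliar with the shard-based streaming pattern the responsibilities above reference, here is a minimal sketch in plain Python (no framework; all names and data are hypothetical) of the idea behind libraries like MosaicML Streaming or WebDataset: samples are read shard by shard so only one shard is resident in memory, then grouped into fixed-size training batches.

```python
from itertools import islice
from typing import Iterator, List

def stream_shards(shards: List[List[dict]]) -> Iterator[dict]:
    """Lazily yield samples shard by shard, so only one shard
    needs to be in memory at a time. In a real pipeline each
    shard would be fetched on demand from object storage (e.g., GCS)."""
    for shard in shards:
        yield from shard

def batched(samples: Iterator[dict], batch_size: int) -> Iterator[List[dict]]:
    """Group a sample stream into fixed-size training batches."""
    while True:
        batch = list(islice(samples, batch_size))
        if not batch:
            return
        yield batch

# Toy multimodal records standing in for genomics/pathology/radiology samples.
shards = [
    [{"id": 0, "modality": "genomics"}, {"id": 1, "modality": "pathology"}],
    [{"id": 2, "modality": "radiology"}, {"id": 3, "modality": "notes"}],
]
batches = list(batched(stream_shards(shards), batch_size=2))
```

Production frameworks add the pieces this sketch omits: deterministic shuffling across shards, resumption mid-epoch, and parallel prefetch of remote shards.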
Required Skills and Experience
  • Master’s degree in Computer Science, Artificial Intelligence, Software Engineering, or a related field.
  • Strong academic background with a focus on AI data engineering.
  • Proven track record (8+ years) in designing, building, and operating large-scale data pipelines and infrastructure in a production environment.
  • Strong experience working with massive, heterogeneous datasets (TBs+) and modern distributed data processing tools and frameworks such as Apache Spark, Ray, or Dask.
  • Hands‑on experience with tools and libraries specifically designed for large-scale ML data handling, such as Hugging Face Datasets, MosaicML Streaming, or similar frameworks (e.g., WebDataset, Petastorm).
  • Experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, SageMaker Pipelines).
  • Understanding of the data challenges specific to training large models (Foundation Models, LLMs, Multimodal Models).
  • Proficiency in programming languages such as Python.
  • Leadership and collaboration: proven ability to bring thought leadership to the product and engineering teams, influencing technical direction and data strategy; experience mentoring junior engineers and collaborating effectively with cross‑functional teams.
  • Excellent communication skills, capable of explaining complex technical concepts to diverse audiences.
  • Strong bias‑to‑action and ability to thrive in a fast‑paced, dynamic research and development environment.