
Senior Compute Platform Engineer

Job in Collegeville, Montgomery County, Pennsylvania, 19426, USA
Listing for: GSK
Full Time position
Listed on 2026-03-01
Job specializations:
  • Software Development
    Software Engineer, Cloud Engineer - Software, Full Stack Developer, AI Engineer
Salary/Wage Range or Industry Benchmark: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below

Site Name: USA - Pennsylvania - Upper Providence, Cambridge 300 Technology Square, Seattle Sixth Ave, South San Francisco 611 Gateway Blvd

Posted Date: Feb 18 2026

Overview

At GSK, we want to supercharge our data capability to better understand our patients and accelerate our ability to discover vaccines and medicines. The Onyx Research Data Platform organization represents a major investment by GSK R&D and Digital & Tech, designed to deliver a step‑change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full‑stack shop consisting of product and portfolio leadership, data engineering, infrastructure and Dev Ops, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward building a next‑generation, metadata‑ and automation‑driven data experience for GSK’s scientists, engineers, and decision‑makers, providing best‑in‑class AI/ML and data analysis environments, and aggressively engineering our data at scale as one unified asset to unlock its value in real‑time.

The Compute Platform Engineering team is building a first‑in‑class platform of tool chains and workflows that accelerate application development, scale up computational experiments, and integrate all computation with project metadata, logs, experiment configuration and performance tracking over abstractions that encompass Cloud and High‑Performance Computing. This metadata‑forward, CI/CD‑driven platform represents and enables the entire application and analysis lifecycle including interactive development and explorations (notebooks), large‑scale batch processing, observability and production application deployments.
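The posting describes a single abstraction spanning Cloud and HPC backends with metadata carried alongside each job. A minimal sketch of what such an abstraction might look like (all class, backend, and field names here are hypothetical illustrations, not part of GSK's Onyx platform):

```python
from dataclasses import dataclass, field

@dataclass
class JobSpec:
    """Backend-agnostic description of a batch job (illustrative only)."""
    name: str
    image: str              # container image, used by cloud backends
    command: list
    cpus: int = 1
    memory_gb: int = 4
    metadata: dict = field(default_factory=dict)  # experiment/config tracking

class SlurmBackend:
    """Renders an HPC submission; a real backend would shell out to sbatch."""
    def submit(self, spec: JobSpec) -> str:
        return (f"sbatch --job-name={spec.name} "
                f"--cpus-per-task={spec.cpus} --mem={spec.memory_gb}G "
                f"--wrap='{' '.join(spec.command)}'")

class CloudBatchBackend:
    """Renders a Google-Batch-style request body; a real backend would call an API."""
    def submit(self, spec: JobSpec) -> dict:
        return {"taskGroups": [{"taskSpec": {
                    "runnables": [{"container": {"imageUri": spec.image,
                                                 "commands": spec.command}}],
                    "computeResource": {"cpuMilli": spec.cpus * 1000,
                                        "memoryMib": spec.memory_gb * 1024}}}],
                "labels": spec.metadata}

# The same spec can target either backend; metadata travels with the job.
spec = JobSpec(name="align-batch", image="gcr.io/example/aligner:1.0",
               command=["align", "--input", "gs://bucket/reads"],
               cpus=8, memory_gb=32, metadata={"experiment": "exp-042"})
print(SlurmBackend().submit(spec))
print(CloudBatchBackend().submit(spec)["labels"])
```

The point of such a layer is that logging, lineage, and experiment metadata attach to the `JobSpec` once and follow the job to whichever backend runs it.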

Key Responsibilities
  • Designs, builds, and operates tools, services, and workflows that deliver high value by solving key business problems.
  • Responsible for developing key components of a hybrid on‑prem/cloud compute platform for both interactive and scalable batch computing, and for establishing processes and workflows to transition existing HPC users and teams to this platform.
  • Responsible for code‑driven environment, application, and container/image builds, as well as CI/CD‑driven application deployments.
  • Consults with scientific users on scaling applications to petabytes of data, drawing on a deep understanding of software engineering, algorithms, and the underlying hardware infrastructure and their impact on performance.
  • Confidently optimizes design and execution of complex solutions within large‑scale distributed computing environments.
  • Produces well‑engineered software, including appropriate automated test suites, technical documentation, and operational strategy.
  • Ensures consistent application of platform abstractions to maintain quality and consistency with respect to logging and lineage.
  • Fully versed in coding best practices and ways of working; participates in code reviews and partners with others to improve the team’s standards.
  • Adheres to the QMS framework and CI/CD best practices, and helps guide improvements to ways of working.
  • Provides leadership to team members to help others get the job done right.
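The responsibilities above call for reasoning about scalability to petabytes of data. A back‑of‑the‑envelope sizing of the kind that reasoning involves (all figures are illustrative assumptions, not platform numbers):

```python
# How many workers are needed to scan 2 PB within a 6-hour window
# if each worker sustains 200 MB/s of effective read throughput?
DATA_PB = 2
WINDOW_H = 6
WORKER_MBPS = 200

total_mb = DATA_PB * 1024**3            # PB -> MB (binary units)
window_s = WINDOW_H * 3600
per_worker_mb = WORKER_MBPS * window_s  # data one worker can read in the window
workers = -(-total_mb // per_worker_mb) # ceiling division
print(workers)                          # -> 498
```

Estimates like this drive decisions about partitioning, worker counts, and whether storage bandwidth (rather than CPU) is the binding constraint.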
Basic Qualifications
  • Bachelor’s degree in Data Engineering, Computer Science, Software Engineering, or a related discipline.
  • 6+ years of professional experience.
  • Experience with Python.
  • Experience with Cloud.
  • Experience with High Performance Compute (HPC).
Preferred Qualifications
  • Deep knowledge of and experience with at least one common programming language (e.g., Python, C++, Java), including its tool chains for documentation, testing, and operations/observability.
  • Deep expertise in modern software development tools and ways of working (e.g., Git/GitHub, DevOps tooling, metrics/monitoring, …).
  • Deep cloud expertise (e.g., AWS, Google Cloud, Azure), including infrastructure‑as‑code tools and scalable compute technologies, such as Google Batch and Vertex.
  • Experience with CI/CD implementations using Git and a common CI/CD stack (e.g., Azure DevOps, Cloud Build, Jenkins, CircleCI, GitLab).
  • Deep expertise with Docker, Kubernetes, and the larger CNCF ecosystem including experience with application deployment tools such as Helm.
  • Experience with low‑level application build tools (make, CMake)…
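A common pattern behind the code‑driven container builds this role covers is deterministic, content‑addressed image tagging, so CI rebuilds and redeploys only when build inputs actually change. A minimal sketch (the scheme is a generic convention, not GSK's; all names are illustrative):

```python
import hashlib

def image_tag(build_inputs: dict) -> str:
    """Derive a deterministic image tag from build inputs (e.g. Dockerfile
    text, pinned dependencies, base image). Identical inputs always yield
    the identical tag, so unchanged images need not be rebuilt."""
    h = hashlib.sha256()
    for key in sorted(build_inputs):     # sort for order-independence
        h.update(key.encode())
        h.update(build_inputs[key].encode())
    return h.hexdigest()[:12]

inputs = {"dockerfile": "FROM python:3.11-slim\nRUN pip install numpy==1.26.4\n",
          "base": "python:3.11-slim"}
tag_a = image_tag(inputs)
tag_b = image_tag(dict(inputs))                            # same inputs -> same tag
tag_c = image_tag({**inputs, "base": "python:3.12-slim"})  # changed input -> new tag
print(tag_a == tag_b, tag_a != tag_c)
```

In practice the tag would feed directly into `docker build -t` and a Helm values file, keeping builds and deployments reproducible from code alone.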