Software Engineer, Applied Training
Job in Sunnyvale, Santa Clara County, California, 94085, USA
Listed on 2026-02-17
Listing for: CoreWeave
Apprenticeship/Internship position
Job specializations:
- IT/Tech
- Systems Engineer, AI Engineer
Job Description & How to Apply Below
CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.
Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025.
What You'll Do:
A lab signs with CoreWeave. They have thousands of GPUs waiting. Their first month? Spent on cluster setup instead of research. Building deployment scripts, fighting container images, wiring up job orchestration. They came to train models. Instead, they're doing operations.
This is the problem. We're building the Applied Training team to fix it.
You'll be an early member of a small team, responsible for our Kubernetes-native research cluster platform, or the sandbox client for agentic training and evaluation, or possibly a new project altogether. The goal is specific:
Give every CoreWeave customer the research infrastructure that currently only exists inside frontier labs.
About the role:
* Contribute to the roadmap for Applied Training. Figure out what actually unlocks new workloads and what's just nice to have. Work directly and closely with customers, and with the teams inside CoreWeave building cloud-native primitives: compute, storage, networking, and so on.
* For the research cluster platform: design and build a complete research cluster experience. CLI, job configuration schema, Kubernetes operators, daemons. Solve the problems researchers actually hit: code distribution, checkpoint-triggered evaluation, cross-cluster scheduling, programmatic job control. Replace the patchwork of scripts customers keep building on their own.
* For sandbox infrastructure: own the Python SDK and work in a very tight loop with the backend team. When an RL training run needs to spawn thousands of isolated containers for agent rollouts, that's this system. When someone wants to run agent benchmarks at scale, that's this system. Make it work with our Kubernetes clusters, storage, and auth so researchers don't have to think about infrastructure.
* Write the documentation for running popular OSS training frameworks on CoreWeave: the kind of work that unblocks customers and helps them succeed.
* Work with infrastructure teams and customers directly. The customers are large AI labs running thousands of GPUs. Understand how they structure their internal supercomputing stacks. Bring that knowledge back to what we build.
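To make the research-cluster bullet above concrete: it mentions a job configuration schema and programmatic job control layered over Kubernetes. A minimal, purely illustrative Python sketch of that idea follows. Every name here (`JobSpec`, `to_k8s_job`, the example image) is hypothetical and implies nothing about CoreWeave's actual product; only the rendered `batch/v1` Job manifest fields are standard Kubernetes.

```python
from dataclasses import dataclass

# Hypothetical sketch: class and field names are illustrative only,
# not part of any real CoreWeave API.
@dataclass
class JobSpec:
    name: str
    image: str
    command: list
    gpus_per_node: int = 8
    nodes: int = 1

    def to_k8s_job(self) -> dict:
        """Render the spec as a standard batch/v1 Job manifest (plain dict)."""
        return {
            "apiVersion": "batch/v1",
            "kind": "Job",
            "metadata": {"name": self.name},
            "spec": {
                # One completion per node; run them in parallel.
                "completions": self.nodes,
                "parallelism": self.nodes,
                "template": {
                    "spec": {
                        "restartPolicy": "Never",
                        "containers": [{
                            "name": "trainer",
                            "image": self.image,
                            "command": self.command,
                            "resources": {
                                "limits": {"nvidia.com/gpu": self.gpus_per_node}
                            },
                        }],
                    }
                },
            },
        }

spec = JobSpec(name="llm-pretrain",
               image="ghcr.io/example/trainer:latest",  # placeholder image
               command=["python", "train.py"], nodes=4)
manifest = spec.to_k8s_job()
print(manifest["spec"]["parallelism"])  # 4
```

The point of such a schema is that researchers declare intent (image, command, GPU count) and the platform owns the translation into operators, daemons, and scheduling, rather than each customer maintaining that patchwork of scripts themselves.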
Who You Are:
* 8-12+ years building distributed systems, ML infrastructure, or developer platforms.
* Real Kubernetes experience: custom controllers, operators, scheduling, CRDs, workload orchestration, not just deploying things to Kubernetes or cluster administration.
* Excited about rigorous engineering, and enabled by AI-based workflows.
* You understand what makes researchers productive. Code distribution matters. Fast iteration cycles matter. Workflows that don't require becoming infrastructure experts matter.
* Familiarity with training: how distributed jobs get scheduled, how ranks initialize, what breaks at scale.
* You've shipped infrastructure that other people rely on daily. Not prototypes. Production systems.
* Good communicator. Can work with customers, translate researcher complaints into system designs.
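One of the bullets above asks for familiarity with how ranks initialize in distributed training. As a small illustration: launchers in the torchrun style hand each worker process its identity through environment variables (`RANK`, `WORLD_SIZE`, `MASTER_ADDR`, `MASTER_PORT` are real PyTorch conventions; the helper function itself is a hypothetical sketch).

```python
import os

def read_dist_env(env=os.environ) -> dict:
    """Read torchrun-style rendezvous variables. Each worker process gets a
    unique global RANK in [0, WORLD_SIZE); all workers share the master
    address/port used to rendezvous before collectives can start."""
    return {
        "rank": int(env.get("RANK", 0)),
        "world_size": int(env.get("WORLD_SIZE", 1)),
        "master_addr": env.get("MASTER_ADDR", "127.0.0.1"),
        "master_port": int(env.get("MASTER_PORT", 29500)),
    }

# Simulate one worker of a 2-node x 8-GPU run (16 processes total).
cfg = read_dist_env({"RANK": "9", "WORLD_SIZE": "16",
                     "MASTER_ADDR": "10.0.0.1", "MASTER_PORT": "29500"})
print(cfg["rank"], cfg["world_size"])  # 9 16
```

When any one of these values is wrong or missing on any worker, rendezvous hangs or ranks collide, which is exactly the class of at-scale failure the bullet refers to.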
Preferred:
* Experience building internal ML platforms or research clusters at a company doing large-scale training.
* Familiarity with agentic AI: RL training with rollouts, agent evaluation, sandbox isolation for running untrusted code.
* Background with Slurm, Ray, or similar workload orchestration. Opinions on where they fall short.
* Experience with container runtimes, isolation (gVisor, Kata), or serverless platforms.
* OSS contributions to Kubernetes SIGs, Ray, PyTorch, or similar.
Wondering if you're a good fit? We believe in investing in our people and value candidates who can bring their diverse experiences to our teams, even if you aren't a 100% skill or experience match.