Device ML Infrastructure Engineer; ML User Experience APIs & Integration
Listed on 2026-02-28
IT/Tech
AI Engineer, Machine Learning/ML Engineer, Data Scientist, Data Engineer
Summary
Imagine being at the forefront of an evolution where innovative AI meets the elegance of Apple silicon. The On-Device Machine Learning team transforms groundbreaking research into practical applications, enabling billions of Apple devices to run powerful AI models locally, privately, and efficiently. We stand at the unique intersection of research, software engineering, hardware engineering, and product development, making Apple a top destination for machine learning innovation.
We are building the first end-to-end developer experience for ML development that, by taking advantage of Apple’s vertical integration, allows developers to iterate on model authoring, optimization, transformation, execution, debugging, profiling, and analysis. This team builds the essential infrastructure that enables machine learning at scale on Apple devices. This involves onboarding modern architectures to embedded systems, developing optimization toolkits for model compression and acceleration, building ML compilers and runtimes for efficient execution, and creating comprehensive benchmarking and debugging toolchains.
This infrastructure forms the backbone of Apple’s machine learning workflows across Camera, Siri, Health, Vision, and other core experiences, contributing to the overall Apple Intelligence ecosystem.
If you are passionate about the technical challenges of running sophisticated ML models across all devices, from resource‑constrained devices to powerful clusters, and eager to directly impact how machine learning operates across the Apple ecosystem, this role presents a great opportunity to work on the next generation of intelligent experiences on Apple platforms.
Description
Our group is seeking an ML Infrastructure Engineer with a focus on ML user experience APIs and integration. The role is responsible for developing new ML model conversion and authoring APIs that serve as the main entry point into Apple’s ML infrastructure. As an engineer in this role, you will be primarily focused on developing and using APIs that enable ML engineers to efficiently author and convert ML models to run effectively on Apple platforms.
You will integrate Apple’s ML tools and APIs into internal and external model repositories to evaluate and demonstrate how efficiently models can be ingested and implemented within Apple’s ML stack. You will also ideate, design, and stress-test the optimizations required to support these models, ranging from source-level optimizations (e.g., in the PyTorch program) to custom transformations within Apple’s model representation. As a power user of Apple’s ML infrastructure, you will help bring up the latest and most capable models with strong performance across hardware targets, showcasing the practical power of Apple’s authoring and runtime APIs.
This role offers the opportunity to shape how ML developers experience Apple’s end-to-end inference stack, from model creation to deployment. The role requires a solid understanding of ML modeling (architectures, training vs. inference trade-offs, etc.), ML deployment optimizations (e.g., quantization), and strong experience designing Python APIs.
- Develop APIs in Apple’s ML stack for ML engineers to efficiently import and implement their models.
- Integrate Apple’s ML tools into internal and external model repositories to demonstrate and stress-test model ingestion with peak efficiency and performance.
- Develop optimizations across the pipeline, including source-level transformations and custom operations, to improve inference efficiency.
- Onboard the latest ML models with peak performance, and use these examples to highlight and validate the authoring and runtime capabilities of Apple’s inference stack.
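To make the responsibilities above concrete, here is a minimal illustrative sketch (not Apple's internal API, which is not public) of the typical first step in model ingestion: capturing a PyTorch model as a traced graph, the portable representation that a converter then lowers to an on-device format. The `TinyModel` class is a hypothetical stand-in for a real model.

```python
import torch

class TinyModel(torch.nn.Module):
    """Hypothetical toy model standing in for a real architecture."""
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

# Put the model in inference mode and capture its graph with an
# example input; this traced form is what conversion pipelines ingest.
model = TinyModel().eval()
example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)

out = traced(example)  # the traced graph runs like the original module
print(out.shape)       # torch.Size([1, 2])
```

In practice a converter (such as Core ML Tools for Apple platforms) would take `traced` and an input specification and emit a deployable model package; the sketch stops at the framework boundary.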
- Bachelor’s in Computer Science, Engineering, or a related subject area.
- Highly proficient in Python programming; familiarity with C++ is required.
- Proficiency in at least one ML authoring framework, such as PyTorch, MLX, or JAX.
- Strong understanding of ML fundamentals, including common architectures such as Transformers.
- Hands‑on experience with ML inference optimizations, such as quantization, pruning, KV…
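As a hedged example of one optimization named above, the following sketch applies post-training dynamic quantization in PyTorch, replacing `Linear` layers' float32 weights with int8 to shrink the model and speed up inference. The layer sizes are arbitrary choices for illustration.

```python
import torch

# A small float32 model to quantize (sizes chosen arbitrarily).
model = torch.nn.Sequential(
    torch.nn.Linear(16, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 4),
).eval()

# Dynamic quantization: weights are stored as int8, activations are
# quantized on the fly at inference time.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.rand(1, 16)
y = qmodel(x)
print(y.shape)  # torch.Size([1, 4])
```

Other techniques in this family (static quantization, pruning, KV-cache optimizations for transformers) trade accuracy, memory, and latency differently depending on the target hardware.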