Software Engineer II - AI/ML, AWS Neuron, LLM Inference

Job in Cupertino, Santa Clara County, California, 95014, USA
Listing for: Amazon
Full Time position
Listed on 2026-02-28
Job specializations:
  • Software Development
    AI Engineer, Machine Learning / ML Engineer
Salary/Wage Range or Industry Benchmark: 80,000 - 100,000 USD yearly
Job Description & How to Apply Below

Amazon never asks for fees or deposits in any form during the recruitment process. Please stay alert to safeguard yourself from potential fraud.


About the Team

The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium. The Inference Enablement and Acceleration team works across multiple technology layers—frameworks, kernels, compiler, runtime, and collectives—to optimize performance and help customers run models on our custom ML accelerators.

Key Job Responsibilities
  • Design, develop, and optimize machine learning models and frameworks for deployment on custom ML hardware accelerators.
  • Participate in all stages of the ML system development lifecycle, including distributed computing architecture design, implementation, performance profiling, hardware‑specific optimizations, testing, and production deployment.
  • Build infrastructure to systematically analyze and onboard multiple models with diverse architectures.
  • Design and implement high‑performance kernels and features for ML operations, leveraging the Neuron architecture and programming models.
  • Analyze and optimize system‑level performance across multiple generations of Neuron hardware.
  • Conduct detailed performance analysis using profiling tools to identify and resolve bottlenecks.
  • Implement optimizations such as fusion, sharding, tiling, and scheduling.
  • Conduct comprehensive testing, including unit and end‑to‑end model testing with continuous deployment and release pipelines.
  • Work directly with customers to enable and optimize their ML models on AWS accelerators.
  • Collaborate across teams to develop innovative optimization techniques.
A Day in the Life

You will collaborate with a cross‑functional team of applied scientists, system engineers, and product managers to deliver state‑of‑the‑art inference capabilities for generative AI applications. Your work will involve debugging performance issues, optimizing memory usage, and shaping the future of Neuron’s inference stack across Amazon and the open‑source community.

About the Team

The Inference Enablement and Acceleration team fosters a builder’s culture where experimentation is encouraged, impact is measurable, and collaboration, ownership, and continuous learning are valued. We mentor new members and encourage knowledge sharing to help team members grow their engineering expertise.

Basic Qualifications
  • Bachelor’s degree in computer science or equivalent.
  • 5+ years of professional software development experience.
  • 5+ years of design or architecture experience for new and existing systems.
  • Fundamentals of machine learning and LLMs, including architecture, training, and inference life cycles, and experience optimizing model execution.
  • Software development experience in C++ and Python.
  • Strong understanding of system performance, memory management, and parallel computing principles.
  • Proficiency in debugging, profiling, and implementing best software engineering practices in large‑scale systems.
Preferred Qualifications
  • Familiarity with PyTorch, JIT compilation, and AOT tracing.
  • Familiarity with CUDA kernels or equivalent ML or low‑level kernels.
  • Experience with high‑performance kernel development, e.g. with CUTLASS or FlashInfer.
  • Knowledge of tile‑level programming syntax and semantics similar to Triton.
  • Experience with online/offline inference serving using vLLM, SGLang, TensorRT, or similar platforms in production environments.
  • Deep understanding of computer architecture, operating‑system level software, and parallel computing.
Equal Opportunity Employer

Amazon is an equal‑opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.

Los Angeles County Applicants

Job duties for this position include working safely and cooperatively with other employees, supervisors, and staff; adhering to standards of excellence despite stressful conditions; communicating effectively and respectfully; and following all federal, state, and local laws and company policies. Criminal history may have a…
