Senior Deep Learning Software Engineer, PyTorch - TensorRT Performance
Job in Santa Clara, Santa Clara County, California, 95053, USA
Listing for: NVIDIA Corporation
Full Time position, listed on 2025-12-13
Job specializations:
- Software Development: AI Engineer, Machine Learning/ML Engineer, Software Engineer, Data Scientist
Job Description & How to Apply Below
## Senior Deep Learning Software Engineer, PyTorch - TensorRT Performance

Locations: US, CA, Santa Clara; US, CA, Remote
Time type: Full time
Posted on: Today
Job requisition: JR2009866
We are now looking for a Senior Deep Learning Software Engineer, PyTorch-TensorRT Performance! NVIDIA is seeking an experienced Deep Learning Engineer passionate about analyzing and improving the performance of Torch inference with TensorRT! NVIDIA is rapidly growing our research and development for Deep Learning Inference and is seeking excellent Software Engineers at all levels of expertise to join our team. Companies around the world are using NVIDIA GPUs to power a revolution in deep learning, enabling breakthroughs in areas like Generative AI, Recommenders, and Vision that have put DL into every software solution.
Join the team that builds the software to enable the performance optimization, deployment, and serving of these DL solutions. We specialize in developing GPU-accelerated deep learning software like TensorRT, DL benchmarking software, and performant solutions to deploy and serve these models.
Collaborate with the deep learning community to integrate TensorRT into PyTorch. Identify performance opportunities and optimize SoTA models across the spectrum of NVIDIA accelerators, from datacenter GPUs to edge SoCs. Implement graph compiler algorithms, frontend operators, and code generators across the PyTorch, Torch-TensorRT, and TensorRT software stack. Work and collaborate with a diverse set of teams on workflow improvements, performance modeling, performance analysis, kernel development, and inference software development.
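For context (not part of the original posting text), a minimal sketch of the Torch-TensorRT workflow described above, assuming a CUDA GPU with torch-tensorrt installed; the toy model, input shapes, and FP16 precision below are illustrative assumptions, not details from the listing:

```python
# Hedged sketch: compile a PyTorch module with Torch-TensorRT so that
# supported subgraphs run as TensorRT engines (unsupported ops fall back
# to eager PyTorch). Model, shapes, and precision are illustrative only.
import torch
import torch_tensorrt

model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval().cuda()

example_input = torch.randn(8, 64, device="cuda")

# torch_tensorrt.compile lowers the module through the graph compiler and
# builds TensorRT engines for the subgraphs it supports.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[example_input],
    enabled_precisions={torch.float16},
)

print(trt_model(example_input).shape)  # torch.Size([8, 10])
```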
**What you'll be doing:**

* Analyze performance issues and identify performance optimization opportunities inside Torch-TensorRT/TensorRT.
* Contribute features and code to NVIDIA/OSS inference frameworks, including but not limited to Torch-TensorRT/TensorRT/PyTorch.
* Work with cross-functional teams inside and outside of NVIDIA across generative AI, automotive, robotics, image understanding, and speech understanding to develop innovative inference solutions.
* Scale performance of deep learning models across different architectures and types of NVIDIA accelerators.
**What we need to see:**

* Bachelor's, Master's, PhD, or equivalent experience in a relevant field (Computer Science, Computer Engineering, EECS, AI).
* At least 4 years of relevant software development experience.
* Excellent Python/C++ programming, software design, and software engineering skills.
* Experience with a DL framework such as PyTorch, JAX, or TensorFlow.
* Experience with performance analysis and performance optimization.
**Ways to stand out from the crowd:**

* Architectural knowledge of GPUs.
* Prior experience with an AoT or JIT compiler in deep learning inference, e.g., TorchDynamo/TorchInductor.
* Prior experience with performance modeling, profiling, debugging, and code optimization of DL/HPC applications.
* GPU programming experience and proficiency in one of the GPU programming domain-specific languages, e.g., CUDA/TileIR/CuTe DSL/CUTLASS/Triton.
GPU deep learning has provided the foundation for machines to learn, perceive, reason and solve problems posed using human language. The GPU started out as the engine for simulating human imagination, conjuring up the amazing virtual worlds of video games and Hollywood films. Now, NVIDIA's GPU runs deep learning algorithms, simulating human intelligence, and acts as the brain of computers, robots and self-driving cars that can perceive and understand the world.
Just as human imagination and intelligence are linked, computer graphics and artificial intelligence come together in our architecture. Two modes of the human brain, two modes of the GPU. This may explain why NVIDIA GPUs are used broadly for deep learning, and NVIDIA is increasingly known as “the AI computing company.” Come, join our DL Architecture team, where you can help build the real-time, cost-effective computing platform driving our success in this exciting and quickly growing field.

#LI-Hybrid
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until December 15, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Position Requirements: 10+ years work experience