Principal Architect, AI Networking
Listed on 2025-12-02 | IT/Tech | AI Engineer
Overview
NVIDIA has been redefining computer graphics, PC gaming, and accelerated computing for more than 25 years. We are tapping into the unlimited potential of AI to define the next era of computing. As an NVIDIA team member, you’ll be part of a diverse, supportive environment where you can make a lasting impact on the world.
As a Software Architect for AI Networking you will be part of a groundbreaking team driving technological innovation. This is an outstanding opportunity to craft the future of AI networking using cutting‑edge technologies to address sophisticated challenges. We work in a dynamic environment where your contributions will influence AI infrastructure.
Responsibilities
- Work on accelerating NVIDIA Dynamo: KV cache management and large-scale inference.
- Develop and research groundbreaking networking technologies to advance and scale AI networks.
- Co‑design software and hardware networking solutions across various networking domains, from network transports to AI frameworks.
- Collaborate with NVIDIA's hardware architecture, software architecture, and research teams to build innovative networking hardware and software solutions.
- Lead the development of prototypes that optimize AI training and inference infrastructure.
Qualifications
- Master’s or Ph.D. in Computer Science, Electrical or Computer Engineering, or a related field from a top-tier university, or equivalent experience.
- 12+ years of relevant academic or industry experience in the field.
- Comprehensive understanding of AI workloads (inference primarily, plus training) and their impact on network infrastructure.
- Strong proficiency in Machine Learning/Deep Learning fundamentals, inference runtimes, and Deep Learning frameworks.
- Skilled in C or C++ for systems software development; familiarity with Rust is helpful.
- Curiosity for building leading-edge technology.
- Ability to work and communicate effectively across diverse teams with varying expertise and time zones.
- Proven research track record.
- Experience with LLM inference and its AI networking and storage needs.
- Background in storage and storage optimization: file systems, object stores, caches, coherency.
- Stellar communication skills.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 272,000 USD – 425,500 USD. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until August 22, 2025.
Location: Beaverton, OR; Hillsboro, OR; other Oregon locations may apply.
NVIDIA is committed to fostering a diverse work environment and is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.