
Research-Hardware Codesign Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: OpenAI
Part Time position
Listed on 2026-01-15
Job specializations:
  • IT/Tech
    AI Engineer, Systems Engineer, Hardware Engineer
Salary/Wage Range or Industry Benchmark: 250,000 USD per year


Research-Hardware Codesign Engineer

Hardware - San Francisco

About the Team

OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

We’re seeking a Research-Hardware Codesign Engineer to operate at the boundary between model research and silicon/system architecture. You’ll help shape the numerics, architecture, and technology bets of future OpenAI silicon in collaboration with both Research and Hardware.

Your work will include debugging gaps between rooflines and reality, writing quantization kernels, de‑risking numerics via model evals, quantifying system architecture tradeoffs, and implementing novel numerics RTL. This is a hands‑on role for people who go looking for hard problems, get to ground truth, and drive solutions to production. Strong prioritization and clear, honest communication are essential.
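
To make the roofline framing concrete, here is a minimal sketch of the kind of back‑of‑envelope check this work involves. The GEMM shape and the device's peak throughput and HBM bandwidth are made‑up, illustrative numbers, and the helper name is hypothetical; this is not a description of any OpenAI tool or silicon.

```python
# Minimal roofline sketch for an m x k @ k x n GEMM on a hypothetical accelerator.
# All hardware numbers are illustrative, not a description of any real part.

def gemm_roofline(m: int, n: int, k: int, bytes_per_elem: float,
                  peak_tflops: float, hbm_gbps: float) -> dict:
    """Estimate arithmetic intensity and attainable throughput for one GEMM."""
    flops = 2.0 * m * n * k                                  # one multiply-accumulate = 2 flops
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # naive: read A and B, write C once
    intensity = flops / bytes_moved                          # flops per byte of HBM traffic
    ridge = (peak_tflops * 1e12) / (hbm_gbps * 1e9)          # intensity where the roof flattens
    attainable_tflops = min(peak_tflops, intensity * hbm_gbps / 1e3)
    return {
        "intensity_flops_per_byte": intensity,
        "ridge_point": ridge,
        "attainable_tflops": attainable_tflops,
        "bound": "compute" if intensity >= ridge else "memory",
    }

# A decode-shaped GEMM (small m) in an 8-bit format on a made-up 1000 TFLOP/s, 3 TB/s device.
print(gemm_roofline(m=16, n=8192, k=8192, bytes_per_elem=1.0,
                    peak_tflops=1000.0, hbm_gbps=3000.0))
```

Comparing an attainable estimate like this against measured kernel throughput is the kind of "gap between rooflines and reality" the role asks you to debug.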

Location:

San Francisco, CA (Hybrid: 3 days/week onsite)
Relocation assistance available.

In this role:
  • Build on our roofline simulator to track evolving workloads, and deliver analyses that quantify the impact of system architecture decisions and support technology pathfinding.
  • Debug gaps between performance simulation and real measurements; clearly communicate root cause, bottlenecks, and invalid assumptions.
  • Write emulation kernels for low‑precision numerics and lossy compression schemes, and give Research the information they need to trade off efficiency against model quality (a minimal emulation sketch follows this list).
  • Prototype numerics modules by pushing RTL through synthesis; hand off novel numerics cleanly, or occasionally own an RTL module end‑to‑end.
  • Proactively pull in new ML workloads, prototype them with rooflines and/or functional simulation, and drive initial evaluation of new opportunities or risks.
  • Understand the whole picture from ML science to hardware optimization, and slice this end‑to‑end objective into near‑term deliverables.
  • Build ad‑hoc collaborations across teams with very different goals and areas of expertise, and keep progress unblocked.
  • Communicate design tradeoffs clearly with explicit assumptions and confidence levels; produce a trail of evidence that enables confident execution.
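
As a rough illustration of the emulation‑kernel bullet above, the following PyTorch sketch fake‑quantizes a tensor through a per‑block int8 format and reports the resulting error. The block size, format, and function name are illustrative assumptions, not a description of any production compression scheme.

```python
# Fake-quantization sketch in PyTorch: round a tensor through a hypothetical
# per-block int8 format (block size is an illustrative choice) and measure the
# error, so model-quality evals can run before real low-precision hardware exists.

import torch

def emulate_blockwise_int8(x: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Quantize x in contiguous blocks along the last dim with a per-block scale, then dequantize."""
    shape = x.shape
    xb = x.reshape(-1, block)                             # assumes the last dim is divisible by block
    scale = xb.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / 127.0
    q = torch.clamp(torch.round(xb / scale), -127, 127)   # simulated int8 storage
    return (q * scale).reshape(shape)                     # dequantized back to the input dtype

x = torch.randn(4, 4096)
x_lp = emulate_blockwise_int8(x)
rel_err = ((x - x_lp).norm() / x.norm()).item()
print(f"relative error from int8 emulation: {rel_err:.4f}")
```

Running a model's weights or activations through an emulation like this (with the real target format) is one way to hand Research a quality‑versus‑efficiency data point ahead of silicon.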
You will thrive in this role if you have:
  • An exceptional track record of high‑quality technical output, and a bias for shipping a prototype now and iterating later in the absence of clear requirements.
  • Strong Python, and C++ or Rust, with a cautious attitude toward correctness and an intuition for clean extensibility.
  • Experience writing Triton, CUDA, or similar, and an understanding of the resulting mapping of tensor ops to functional units.
  • Working knowledge of PyTorch or JAX; experience in large ML codebases is a plus.
  • Practical understanding of floating point numerics, the ML tradeoffs of reduced precision, and the current state of the art in model quantization.
  • Deep understanding of transformer models, and strong intuition for transformer rooflines and the tradeoffs of sharded training and inference in large‑scale ML systems (see the back‑of‑envelope sketch after this list).
  • Experience writing RTL (especially for floating‑point logic) and an understanding of PPA (power, performance, area) tradeoffs are a plus.
  • Strong cross‑functional communication (e.g. across ML researchers and hardware engineers); ability to slice ambiguous early‑incubation ideas into concrete arenas in which progress can be made.
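
For the transformer‑roofline intuition mentioned in the list above, a back‑of‑envelope decode model already shows why reduced precision matters: in single‑token decode, each weight is streamed from HBM roughly once per token, so throughput is usually bandwidth‑bound. The parameter count, precision, and hardware numbers below are illustrative assumptions only.

```python
# Back-of-envelope decode roofline: in single-token decode, every weight is read
# from HBM roughly once per token. All numbers below are illustrative assumptions.

def decode_tokens_per_sec(n_params: float, bytes_per_param: float,
                          hbm_gbps: float, peak_tflops: float) -> float:
    """Crude per-device estimate ignoring KV-cache traffic, comms, and overlap."""
    bytes_per_token = n_params * bytes_per_param            # weights streamed once per token
    flops_per_token = 2.0 * n_params                        # ~2 flops per parameter per token
    t_mem = bytes_per_token / (hbm_gbps * 1e9)              # seconds if bandwidth-limited
    t_compute = flops_per_token / (peak_tflops * 1e12)      # seconds if compute-limited
    return 1.0 / max(t_mem, t_compute)

# Halving bytes per parameter (e.g. a 16-bit vs. an 8-bit format) on a made-up
# 1000 TFLOP/s, 3 TB/s device roughly doubles bandwidth-bound decode throughput.
for bpp in (2.0, 1.0):
    tps = decode_tokens_per_sec(n_params=70e9, bytes_per_param=bpp,
                                hbm_gbps=3000.0, peak_tflops=1000.0)
    print(f"{bpp:.0f} bytes/param -> ~{tps:.0f} tokens/s per device")
```

Accounting for KV‑cache traffic, communication, and overlap would refine the estimate, but the shape of the precision tradeoff is already visible from this arithmetic.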

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general‑purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and…
