
Hardware Development Infrastructure Engineer

Job in San Francisco, San Francisco County, California, 94199, USA
Listing for: OpenAI
Full Time position
Listed on 2026-03-01
Job specializations:
  • IT/Tech
    Systems Engineer, Cloud Computing, Data Engineer
Salary/Wage Range: 80,000 – 100,000 USD yearly
Job Description & How to Apply Below

About the Team

OpenAI's Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co‑design hardware tightly integrated with AI models. In addition to delivering production‑grade silicon for OpenAI's supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role

We're looking for a Hardware Development Infrastructure Engineer to build and run the infrastructure that powers OpenAI's hardware development lifecycle. You'll work closely with hardware teams to translate their workflows into scalable, observable, and automated systems, and then own the platforms that support them over time.

This role sits at the intersection of hardware, cloud, HPC, DevOps, and data. You'll design regression systems, CI/CD pipelines, cloud and cluster platforms, and the data foundations that make development efficiency visible and measurable.

In this role, you will:
  • Partner with hardware teams on workflows and tooling:
    Embed with teams across DV, PD, emulation, formal, and software to understand development flows, identify failure modes, and deliver tooling (CLIs, services, APIs) that reduces manual work and accelerates iteration.
  • Build and operate regression systems at scale:
    Own regressions end‑to‑end, from definition and scheduling to execution, results ingestion, triage, and reporting, while improving throughput and reproducibility and reducing test flakiness.
  • Own CI/CD for infrastructure and tooling:
    Design and operate pipelines for infrastructure‑as‑code, services, images, and cluster configuration changes, including testing, gated deploys, staged rollouts, and safe rollback.
  • Run cloud and HPC platforms:
    Design, provision, and operate cloud infrastructure (Azure preferred) and HPC/HTC clusters (e.g., Slurm), tuning scheduling policies, autoscaling, node life cycles, and cost‑performance tradeoffs.
  • Build data foundations and visibility:
    Develop ETL pipelines to ingest metrics, logs, and results; operate databases for workflow metadata and outcomes; and build dashboards that surface efficiency, utilization, and reliability trends.
  • Drive operational excellence:
    Establish monitoring and alerting, lead incident response and postmortems, maintain runbooks, and produce clear, durable documentation.
You might thrive in this role if you have:
  • Familiarity with chip development workflows and at least one deep EDA domain (e.g., DV, PD, emulation, or formal verification). Strong infrastructure fundamentals, including cloud platforms, networking, security, performance, and automation.
  • Experience operating cloud environments (Azure preferred; AWS, GCP, or OCI acceptable) with strong infrastructure‑as‑code practices (e.g., Terraform, Bicep; configuration management tools a plus). Strong programming skills (Python preferred) and solid software engineering and scripting practices.
  • Experience building and operating CI/CD systems (e.g., Jenkins, Buildkite, GitHub Actions), including testing and release workflows.
  • Database experience (e.g., Postgres or MySQL), including schema design, migrations, indexing, and operational safety.
  • Clear communication skills and strong judgment: able to explain tradeoffs, propose pragmatic solutions, and articulate a realistic vision for scalable infrastructure.
Preferred Qualifications
  • Experience operating Slurm or other large‑scale cluster schedulers.
  • Experience with enterprise authentication and directory services (e.g., Entra, LDAP, FreeIPA, SSSD).
  • Experience building or operating backend and middleware systems such as message queues, caches, artifact stores, or internal service platforms.
  • Familiarity with high‑performance storage architectures and data movement optimization.
  • Experience running and monitoring license servers for expensive or capacity‑constrained toolchains.

To comply with U.S. export control laws and regulations, candidates for this role may need to meet certain legal status requirements as…
