Mid-Level

Site Reliability Engineer, Frontier Systems Infrastructure


OpenAI


San Francisco
On-site
Posted November 3, 2025

Job Description

About the Team

The Frontier Systems team at OpenAI builds, launches, and supports the largest supercomputers in the world, which OpenAI uses for its most cutting-edge model training.

We take data center designs, turn them into real, working systems, and build any software needed to run large-scale frontier model training.

Our mission is to bring up and stabilize these hyperscale supercomputers, and to keep them reliable and efficient throughout frontier model training.

About the Role

We are looking for engineers to operate the next generation of compute clusters that power OpenAI’s frontier research.

This role blends distributed systems engineering with hands-on infrastructure work in our largest data centers. You will grow Kubernetes clusters to massive scale, automate bare-metal bring-up, and build the software layer that hides the complexity of an enormous fleet of nodes spanning multiple data centers.

You will work at the intersection of hardware and software, where speed and reliability are critical. Expect to manage fast-moving operations, quickly diagnose and fix issues when things are on fire, and continuously raise the bar for automation and uptime.

In this role, you will:

  • Spin up and scale large Kubernetes clusters, including automation for provisioning, bootstrapping, and cluster lifecycle management

  • Build software abstractions that unify multiple clusters and present a seamless interface to training workloads

  • Own node bring-up from bare metal through firmware upgrades, ensuring fast, repeatable deployment at massive scale

  • Improve operational metrics such as reducing cluster restart times (e.g., from hours to minutes) and accelerating firmware or OS upgrade cycles

  • Integrate networking and hardware health systems to deliver end-to-end reliability across servers, switches, and data center infrastructure

  • Develop monitoring and observability systems to detect issues early and keep clusters stable under extreme load

  • Execute at the same level as a software engineer
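To make the automation work above concrete, here is a minimal, purely illustrative sketch (not OpenAI's actual tooling) of the kind of node-health triage logic a cluster lifecycle controller might run: scan a fleet's reported conditions, cordon nodes with fatal faults, and queue them for repair. The condition names and the `Node`/`triage` helpers are hypothetical.

```python
# Illustrative node-health triage for a large fleet. All names here
# (FATAL_CONDITIONS, Node, triage) are hypothetical examples, not any
# real cluster API.
from dataclasses import dataclass, field

# Conditions assumed to take a node out of the scheduling pool.
FATAL_CONDITIONS = {"GpuXidError", "NicLinkDown", "DiskPressure"}

@dataclass
class Node:
    name: str
    conditions: set = field(default_factory=set)
    cordoned: bool = False

def triage(nodes):
    """Cordon nodes reporting a fatal condition; return names to repair."""
    to_repair = []
    for node in nodes:
        if node.conditions & FATAL_CONDITIONS and not node.cordoned:
            node.cordoned = True  # stop scheduling new work here
            to_repair.append(node.name)
    return sorted(to_repair)

fleet = [
    Node("node-0001", {"Ready"}),
    Node("node-0002", {"GpuXidError"}),
    Node("node-0003", {"NicLinkDown", "Ready"}),
]
print(triage(fleet))  # ['node-0002', 'node-0003']
```

In a real system this decision loop would sit behind the Kubernetes API (e.g., watching node conditions and cordoning via the scheduler), with firmware and hardware health signals feeding in; the sketch only shows the shape of the triage step.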

You might thrive in this role if you:

  • Have deep experience operating or scaling Kubernetes clusters or similar container orchestration systems in high-growth or hyperscale environments

  • Bring strong programming or scripting skills (Python, Go, or similar) and familiarity with Infrastructure-as-Code tools such as Terraform or CloudFormation

  • Are comfortable with bare-metal Linux environments, GPU hardware, and large-scale networking

  • Enjoy solving fast-moving, high-impact operational problems and building automation to eliminate manual work

  • Can balance careful engineering with the urgency of keeping mission-critical systems running

Qualifications

  • Experience as an infrastructure, systems, or distributed systems engineer in large-scale or high-availability environments

  • Strong knowledge of Kubernetes internals, cluster scaling patterns, and containerized workloads

  • Proficiency in cloud infrastructure concepts (compute, networking, storage, security) and in automating cluster or data center operations

  • Bonus: background in GPU workloads, firmware management, or high-performance computing

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
