Overview
Mid-Level

Researcher, Automated Red Teaming


OpenAI

San Francisco
On-site
Posted February 23, 2026

Job Description

About the team

Our Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings.

Preparedness is a team within Safety Systems that produces work such as OpenAI's Preparedness Framework. Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, and coordinates overall AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.

Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To help ensure AI drives positive change, the Preparedness team helps OpenAI prepare for the development of increasingly capable frontier AI models. This team is responsible for identifying, tracking, and mitigating catastrophic risks related to frontier AI systems.

The mission of the Preparedness team is to:

  1. Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic.

  2. Ensure we have concrete procedures, infrastructure and partnerships to mitigate these risks and to safely handle the development of powerful AI systems.

About the role

This role leads the Automated Red Teaming (ART) effort: building scalable, research-driven systems that continuously uncover failure modes in our models and safeguards, and translate those findings into actionable, production-facing improvements. The goal is to reduce expected harm by finding the highest-leverage, least-covered weaknesses early and reliably.

Your Responsibilities:

  • Own the research and technical direction for automated red teaming across catastrophic risk areas, with an initial emphasis on:

    • Automated classifier jailbreak discovery (cyber and bio).

    • Automated bio threat-development elicitation (worst-feasible planning uplift).

    • Chain-of-thought (CoT) monitoring evasion probing (and adjacent loss-of-control evaluations).

  • Partner closely with:

    • Vertical risk teams (Cyber, Bio, Loss of Control) to define threat models, prioritize targets, and land mitigations.

    • The Classifiers team to turn discovered attacks into training data, evals, and measurable robustness gains.

    • Product / Engineering / Safety stakeholders to ensure ART outputs are operationally useful.

You'll enjoy this work if you:

  • Feel a strong pull toward AI safety, and you’re motivated by reducing real-world catastrophic risk (not just publishing cool results).

  • Love breaking systems (responsibly) — you get energy from finding weird, high-severity failure modes and turning them into concrete fixes.

  • Have strong applied research instincts, especially around evaluations: you’re good at designing experiments that are reproducible, interpretable, and hard to fool.

  • Bring hands-on experience with LLMs and agents, including multi-turn behaviors, tool use, and the ways models adapt to constraints.

  • Are comfortable building scalable automation, not just prototypes — you can turn red-teaming ideas into pipelines that run continuously and produce high-signal outputs.

  • Have solid software engineering fundamentals (data structures, algorithms, testing discipline) and you can work effectively in a production-adjacent environment.

  • Think in threat models.

Tags: Go, Rust, AWS, AI, Data, Product Design