Security Researcher, Trusted Computing and Cryptography
OpenAI
Job Description
About the Team
Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.
The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but operational in how we execute, and we support all of OpenAI’s research and product initiatives. Our tenets include: prioritizing impact, enabling researchers, preparing for future transformative technologies, and fostering a robust security culture.
A current security clearance is not mandatory, but eligibility for clearance sponsorship is required.
About the Role
Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems, spanning data pipelines, training/inference runtimes, and system and supply-chain components. You'll drive offensive research, produce technical deliverables, and serve as OpenAI's primary technical counterpart for select external partners (including potential U.S. government stakeholders).
What you’ll do:
Build an AI Stack Threat Map spanning the AI lifecycle, from data to deployment.
Deliver deep-dive reports on vulnerabilities and mitigations for training and inference, focused on systemic, cross-layer risks.
Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs.
Engage external partners as the primary technical representative; align deliverables to technical objectives and milestones.
Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, and control planes).
Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations.
You may thrive if you:
Have led high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders).
Have deep experience with cutting-edge offensive-security techniques.
Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end-to-end.
Operate independently, align diverse teams, and deliver on tight timelines.
Communicate clearly and concisely with experts and decision-makers.
Goals & impact
Provide decision-makers with a common vulnerability taxonomy, early warning of systemic weaknesses, and a repeatable methodology that measurably raises the bar for adversaries.
Outcomes include: more resilient AI architectures, reduced exploit windows, and better-targeted security R&D investments across defense and public-sector stakeholders.
Key technical challenges
End-to-end coverage: Tracking threats across the AI lifecycle, including data, software, and system-level components.
Cross-disciplinary integration: Reconciling perspectives from owners of disjoint stack layers to capture composite attack paths.
Stochastic inference: Non-determinism from temperature/top-k/top-p decoding complicates reproducibility; validating vulnerabilities requires seeded runs, harness control, and careful methodology.
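The reproducibility point above can be illustrated with a toy sampler (the helper `sample_next_token` is hypothetical, not part of any real inference stack, which also faces non-determinism from kernels, batching, and hardware): once the decoding parameters and RNG seed are pinned, two runs emit identical token sequences, which is what makes a probabilistic finding repeatable.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=5, rng=None):
    """Sample one token id from raw logits using temperature + top-k decoding.
    Illustrative sketch only; real inference runs this on-accelerator."""
    if rng is None:
        rng = random.Random()  # unseeded: runs will generally diverge
    scaled = [l / temperature for l in logits]
    # Keep only the top_k highest-scoring token ids.
    top = sorted(range(len(scaled)), key=lambda i: scaled[i], reverse=True)[:top_k]
    # Softmax over the surviving tokens (shifted for numerical stability).
    m = max(scaled[i] for i in top)
    weights = [math.exp(scaled[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

# Two runs with the same seed and the same harness produce the same tokens.
logits = [2.0, 1.0, 0.5, 3.0, 0.1, 1.5]
rng_a, rng_b = random.Random(1234), random.Random(1234)
run_a = [sample_next_token(logits, rng=rng_a) for _ in range(8)]
run_b = [sample_next_token(logits, rng=rng_b) for _ in range(8)]
assert run_a == run_b  # seeded runs are repeatable
```

Pinning the seed controls only sampling randomness; a full validation harness must also pin model weights, prompts, and decoding parameters.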
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core.