Mid-Level

Head of Trust & Safety Affairs

OpenAI

San Francisco
Remote
Posted February 5, 2026

Job Description

About the Team

OpenAI’s mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. Achieving this mission requires not only cutting-edge research, but also thoughtful engagement with the policymakers, institutions, and communities shaping the future of AI—and clear, credible communication with the public.

The Global Affairs team leads this engagement in close partnership with teams across OpenAI. We work to ensure that diverse external perspectives inform our decisions, and that OpenAI’s approach to safety, governance, and responsibility is understood by those who rely on, regulate, and are affected by our technologies. Through this work, we help advance effective regulation, responsible industry standards, and governance frameworks that support the safe and beneficial development of AI.

This role can be based in San Francisco or performed remotely.

About the Role

OpenAI is seeking a Head of Trust & Safety Affairs to lead strategic narrative development and key stakeholder engagement on the issues most central to public trust in AI. This is a high-impact leadership role at the intersection of product, policy, communications, legal, security, and research.

You will oversee policy, partnerships, and outreach related to trust and safety topics, including privacy, data security, child safety, mental health, elections, and other emerging issues. The role requires translating complex technical and policy work into clear, accurate, and credible narratives for global audiences, while advising senior leaders and guiding internal alignment during both proactive moments and fast-moving situations.

This role reports to the Vice President for Global Policy.

Responsibilities

Strategy and leadership

  • Set the stakeholder outreach and engagement strategy for trust and safety topics, aligning narratives across product, policy, legal, security, research, and communications.

  • Serve as a senior advisor to executives and cross-functional leaders on trust, safety, and integrity narratives, balancing transparency, accountability, and operational considerations.

  • Build and lead a high-performing team and/or agency partners capable of proactive communications, issues management, and rapid response.

Portfolio ownership

  • Develop narrative pillars, proof points, and content roadmaps across trust and safety issue areas.

  • Partner closely with Privacy, Security, and Safety teams to communicate user protections, controls, and governance mechanisms in ways that are accurate, usable, and trustworthy.

  • Shape OpenAI’s external approach to sensitive and high-stakes topics, including child and youth safety, mental health-related safeguards, and election integrity commitments.

Proactive outreach and thought leadership

  • Lead external outreach and narrative development for product updates, policy positions, safety improvements, research releases, and transparency initiatives related to trust and safety.

  • Contribute to the development of executive talking points, blogs and op-eds, Q&As, briefing materials, and communications for partners, NGOs, government, and industry.

  • Identify opportunities to set the public agenda by clearly explaining safety mitigations, red-teaming insights, enforcement approaches, and evolving industry standards.

Issues management and rapid response

  • Own preparedness and response for trust and safety issues, including incident communications planning, escalation protocols, stakeholder notifications, and media strategy.

  • Lead cross-functional communications during high-intensity moments, ensuring clarity, consistency, speed, and calm execution.

  • Anticipate emerging risk narratives, misinformation trends, and stakeholder concerns, and build monitoring systems and response playbooks.

Internal alignment and operating rhythms

  • Establish systems
