Anthropic AI Safety Fellow
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We’re accepting applications on a rolling basis for cohorts starting in July 2026 and beyond. Applications for the May 2026 cohort are now closed.
Anthropic Fellows Program Overview
The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent. We provide funding and mentorship to promising technical talent, regardless of previous experience, to spend four months researching the frontier of AI safety.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).
We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.
What to Expect
- Direct mentorship from Anthropic researchers
- Access to a shared workspace (in either Berkeley, California, or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD & access to benefits (benefits vary by country)
- Funding for compute (~$15k/month) and other research expenses
Mentors, Research Areas, & Past Projects
Fellows will go through a project selection and mentor-matching process. Potential mentors include, among others:
- Jan Leike
- Sam Bowman
- Sara Price
- Alex Tamkin
- Nina Panickssery
- Trenton Bricken
- Logan Graham
- Jascha Sohl-Dickstein
- Nicholas Carlini
- Joe Benton
- Collin Burns
- Fabien Roger
- Samuel Marks
- Kyle Fish
- Ethan Perez
Our mentors will lead projects in select AI safety research areas, such as:
- Scalable Oversight: Developing techniques to keep highly capable models helpful and honest, even as they surpass human-level intelligence in various domains.
- Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
- Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
- Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
- AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.
On our Alignment Science and Frontier Red Team blogs, you can read about past projects, including:
- AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
- Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data