Anthropic AI Security Fellow
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Apply using this link. We’re accepting applications on a rolling basis for cohorts starting in July 2026 and beyond. Applications for the May 2026 cohort are now closed.
AI Security at Anthropic
We believe we are at an inflection point for AI’s impact on cybersecurity. Models are now useful for cybersecurity tasks in practice: for example, Claude can outperform human teams in some cybersecurity competitions and help us discover vulnerabilities in our own code.
We are looking for researchers and engineers to help us accelerate defensive use of AI to secure code and infrastructure.
Anthropic Fellows Program Overview
The Anthropic Fellows Program is designed to accelerate AI security and safety research and to foster research talent. We provide funding and mentorship to promising technical talent, regardless of previous experience, to research the frontier of AI security and safety for four months.
Fellows will primarily use external infrastructure (e.g. open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g. a paper submission). In our previous cohorts, over 80% of fellows produced papers (more below).
We run multiple cohorts of Fellows each year. This application is for cohorts starting in July 2026 and beyond.
What to Expect
- Direct mentorship from Anthropic researchers
- Access to a shared workspace (in either Berkeley, California, or London, UK)
- Connection to the broader AI safety research community
- Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD & access to benefits (benefits vary by country)
- Funding for compute (~$15k/month) and other research expenses
Mentors, Research Areas, & Past Projects
Fellows will undergo a project selection & mentor matching process. Potential mentors include:
- Nicholas Carlini
- Keri Warr
- Evyatar Ben Asher
- Keane Lucas
- Newton Cheng
On our Alignment Science and Frontier Red Team blogs, you can read about some past Fellows’ projects, including:
- AI agents find $4.6M in blockchain smart contract exploits: Winnie Xiao and Cole Killian, mentored by Nicholas Carlini and Alwin Peng
- Strengthening Red Teams: A Modular Scaffold for Control Evaluations: Chloe Loughridge et al., mentored by Jon Kutasov and Joe Benton
You may be a good fit if you
- Are motivated by reducing catastrophic risks from advanced AI systems
- Are excited to transition into full-time empirical AI safety research and would be interested in a full-time role at Anthropic
Please note: We do not guarantee that we will make any full-time offers to Fellows.