Mid-Level

Prompt Engineer, Claude Code

Anthropic

San Francisco, CA | New York City, NY | Seattle, WA
Hybrid
Posted April 1, 2026

Job Description

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role

 

As a Prompt Engineer on the Claude Code team, you'll own Claude's behaviors specifically within Claude Code — ensuring users get a consistent, safe, and high-quality experience as we ship new models and evolve the product. This is a highly specialized role sitting at the intersection of model behavior and product quality.

You'll be the expert on how Claude behaves inside Claude Code, owning and maintaining the system prompts that ship with each new model snapshot. When a new model drops, you're the person making sure Claude Code feels right within days — not weeks. You'll work closely with Model Quality and Research to understand emergent behaviors and behavioral regressions, and with product and safeguards teams to respond quickly when something goes wrong.

This role requires someone who can move fast on behavioral tuning while maintaining rigor, and who cares deeply about the end-to-end developer experience Claude Code delivers. You'll need strong prompting skills, excellent judgment about model behaviors, and the collaborative skills to work across product, safeguards, and research teams.

Salary: $320,000-405,000 (SWE-G 5-6)

Responsibilities

  • Own Claude Code's system prompts for each new model snapshot, ensuring behaviors feel consistent and well-tuned

  • Review production prompt changes and serve as a resource for particularly challenging prompting problems involving alignment and reputational risks

  • Lead incident response for behavioral and policy concerns, coordinating with product and safeguards teams

  • Scale prompting and evaluation best practices across Claude Code and product teams

  • Deliver product evaluations focused on model behaviors 

  • Define and streamline processes for rolling out prompt changes, including launch criteria and review practices

  • Create model-specific prompt guides that document quirks and optimal prompting strategies for each release

  • Collaborate with product teams to translate feature requirements into effective prompts

You May Be a Good Fit If You

  • Are a power user of agentic coding tools and have strong intuition about model capabilities and limitations

  • Thrive in high-intensity environments with fast iteration cycles

  • Take full ownership of problems and drive them to completion independently

  • Are skilled at creating and maintaining behavioral evaluations

  • Have strong technical understanding, including comprehension of agent scaffold architectures and model training processes

  • Are an experienced coder comfortable working in Python and TypeScript

  • Have independently driven changes through production systems with strong execution and responsiveness

  • Have experience translating user feedback and product needs into coherent prompts and behavioral specifications

  • Excel at working across organizational boundaries, collaborating effectively with teams that have differing goals and perspectives

  • Care deeply about AI safety and making Claude a healthy alternative in the AI landscape

 

