Research Engineer, Interpretability
Anthropic
Compensation
$315,000 - $560,000/year
Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
When you see what modern language models are capable of, do you wonder, "How do these things work? How can we trust them?"
The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe.
Think of us as doing the "neuroscience" of neural networks with "microscopes" we build ourselves - or as reverse-engineering neural networks the way you might reverse-engineer a binary program.
More resources to learn about our work:
- Our research blog - covering advances including Monosemantic Features and Circuits
- An Introduction to Interpretability from our research lead, Chris Olah
- The Urgency of Interpretability from CEO Dario Amodei
- Engineering Challenges Scaling Interpretability - directly relevant to this role
- 60 Minutes segment - Around 8:07, see a demo of tooling our team built
- New Yorker article - what it's like to work on one of AI's hardest open problems
Even if you haven’t worked on interpretability before, the infrastructure expertise we need is similar to what's required across the lifecycle of a production language model:
- Pretraining: Training dictionary learning models looks a lot like model pretraining - creating stable, performant training jobs for massively parameterized models across thousands of chips
- Inference: Interp runs a customized inference stack. Day-to-day analysis requires services that let us edit a model's internal activations mid-forward-pass - for example, adding a "steering vector" (see the sketch after this list)
- Performance: Like all LLM work, we push up against the limits of the hardware and software. Rather than squeezing out the last 0.1%, we focus on finding bottlenecks, fixing them, and moving on - the research and the safety mission it serves evolve quickly
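To make the "steering vector" example above concrete, here is a minimal, illustrative sketch - not our actual stack - of editing a model's internal activations mid-forward-pass with a PyTorch forward hook. The toy model, module names, and dimensions are all assumptions for demonstration.

import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    # Stand-in for one transformer block; its output plays the role of the residual stream.
    def __init__(self, d_model):
        super().__init__()
        self.linear = nn.Linear(d_model, d_model)

    def forward(self, x):
        return x + self.linear(x)

class TinyModel(nn.Module):
    def __init__(self, d_model=16, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList(TinyBlock(d_model) for _ in range(n_layers))

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

def add_steering_hook(module, steering_vector, scale=1.0):
    # Returning a value from a forward hook replaces the module's output,
    # so downstream layers see the steered activation.
    def hook(_module, _inputs, output):
        return output + scale * steering_vector
    return module.register_forward_hook(hook)

d_model = 16
model = TinyModel(d_model)
steering_vector = torch.randn(d_model)

# Steer the output of block 2, run a forward pass, then remove the hook
# so subsequent passes are unmodified.
handle = add_steering_hook(model.blocks[2], steering_vector, scale=4.0)
steered = model(torch.randn(1, d_model))
handle.remove()
baseline = model(torch.randn(1, d_model))

The core property the real infrastructure has to provide - at frontier-model scale - is exactly what the hook does here: intercept an activation at a chosen layer, modify it, and let the rest of the forward pass proceed unchanged.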
The science keeps scaling - and it's now applied directly in safety audits on frontier models, with real deadlines. As our research has matured, engineering and infrastructure have become a bottleneck. Your work will have a direct impact on one of the most important open problems in AI.
Responsibilities:
- Build and maintain the specialized inference and training infrastructure that powers interpretability research - including instrumented forward/backward passes, activation extraction, and steering vector application (see the sketch after this list)
- Resolve scaling and efficiency bottlenecks through profiling, optimization, and close collaboration with peer infrastructure teams
- Design tools, abstractions, and platforms that enable researchers to rapidly experiment without hitting engineering barriers
- Help bring interpretability research into production safety audits - with real deadlines and high reliability expectations
- Work across the stack - from model internals and accelerator-level optimization to user-facing research tooling
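For a flavor of the "activation extraction" and dictionary-learning work mentioned above, here is a minimal sketch under illustrative assumptions: a forward hook captures activations from a stand-in layer, and a small sparse autoencoder (a simple dictionary-learning model) takes one training step on them. All shapes, hyperparameters, and names are hypothetical.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # A small dictionary-learning model: an overcomplete ReLU encoder plus a linear decoder.
    def __init__(self, d_model, d_dict):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)             # reconstruction of the input activations
        return recon, features

d_model, d_dict = 16, 64
probe_layer = nn.Linear(d_model, d_model)  # stand-in for one layer inside a larger model

# (1) Activation extraction: a forward hook records the layer's outputs.
captured = []
handle = probe_layer.register_forward_hook(lambda mod, inp, out: captured.append(out.detach()))
_ = probe_layer(torch.randn(128, d_model))
handle.remove()
acts = torch.cat(captured)

# (2) One dictionary-learning training step: reconstruction loss plus an L1 sparsity penalty.
sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
recon, features = sae(acts)
loss = ((recon - acts) ** 2).mean() + 1e-3 * features.abs().mean()
loss.backward()
opt.step()

At research scale, the same loop runs over activations streamed from production-sized models across thousands of chips, which is where the pretraining-style infrastructure challenges come in.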
You may be a good fit if you:
- Have 5-10+ years of experience building software
- Are highly proficient in at least one programming language (e.g., Python, Rust)