[Expression of Interest] Research Scientist / Engineer, Alignment Finetuning
Anthropic
Compensation
$350,000 - $500,000/year
Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role:
As a Research Scientist/Engineer on the Alignment Finetuning team at Anthropic, you'll lead the development and implementation of techniques for training language models that are better aligned with human values: models that demonstrate stronger moral reasoning, improved honesty, and good character. You'll develop novel finetuning techniques and use them to demonstrably improve model behavior.
Note: For this role, we conduct all interviews in Python. We have filled our headcount for 2025; however, we are leaving this form open as an expression of interest, since we expect to grow the team in the future and will review your application when we do. As such, you may not hear back about your application to this team until the new year.
Responsibilities:
- Develop and implement novel finetuning techniques using synthetic data generation and advanced training pipelines
- Use these to train models to have better alignment properties including honesty, character, and harmlessness
- Create and maintain evaluation frameworks to measure alignment properties in models (a minimal sketch follows this list)
- Collaborate across teams to integrate alignment improvements into production models
- Develop processes to help automate and scale the work of the team
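For context, here is a minimal sketch of what such an evaluation framework can look like, in Python (the language we interview in). The `generate` and `judge` callables are hypothetical placeholders, not Anthropic internals; in a real harness they would wrap a finetuned model and a grading model or rubric.

```python
from statistics import mean
from typing import Callable

# Hypothetical rubric-based evaluation harness. `generate` and `judge` are
# stand-ins: in practice they would call a trained model and a grader
# (another model or a human rubric), not the toy lambdas used below.
def run_eval(
    prompts: list[str],
    generate: Callable[[str], str],
    judge: Callable[[str, str], float],  # returns a score in [0, 1]
) -> dict:
    scores = [judge(p, generate(p)) for p in prompts]
    return {"mean_score": mean(scores), "n": len(scores)}

# Toy usage: a "model" that echoes the prompt, and a "judge" that
# rewards any non-empty answer.
result = run_eval(
    prompts=["Is the earth flat?", "What is 2 + 2?"],
    generate=lambda p: f"Answer to: {p}",
    judge=lambda p, a: 1.0 if a else 0.0,
)
print(result)
```

A real framework would add per-property metrics (honesty, character, harmlessness), uncertainty estimates, and logging, but the shape of the harness stays the same.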
You may be a good fit if you:
- Have an MS/PhD in Computer Science, ML, or related field, or equivalent experience
- Possess strong programming skills, especially in Python
- Have experience with ML model training and experimentation
- Have a track record of implementing ML research
- Demonstrate strong analytical skills for interpreting experimental results
- Have experience with ML metrics and evaluation frameworks
- Excel at turning research ideas into working code
- Can identify and resolve practical implementation challenges
Strong candidates may also have:
- Experience with language model finetuning
- Background in AI alignment research
- Published work in ML or alignment
- Experience with synthetic data generation
- Familiarity with techniques like RLHF, constitutional AI, and reward modeling (reward modeling is sketched after this list)
- Track record of designing and implementing novel training approaches
- Experience with model behavior evaluation and improvement
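As background on the reward modeling mentioned above, the sketch below shows the standard Bradley-Terry pairwise preference loss that underlies many RLHF reward models. The tiny linear model and the random tensors standing in for response embeddings are illustrative assumptions, not any production setup.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy reward model: scores a fixed-size "embedding" of a
# response. In practice the reward model would share a transformer backbone
# with the policy; a linear head over random features stands in for it here.
class TinyRewardModel(torch.nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.head = torch.nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)  # scalar reward per example

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss used in standard reward modeling:
    # push the preferred response's reward above the rejected one's.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder "embeddings" of chosen/rejected responses.
chosen, rejected = torch.randn(8, 32), torch.randn(8, 32)

loss = preference_loss(model(chosen), model(rejected))
opt.zero_grad()
loss.backward()
opt.step()
print(f"pairwise preference loss: {loss.item():.4f}")
```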
The annual compensation range for this role is listed above.
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification.