Privacy Research Engineer, Safeguards
Anthropic
Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Role
We are looking for researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for models to interact with private user data. In this role, you'll design and implement privacy-preserving techniques, audit our current techniques, and set the direction for how Anthropic handles privacy more broadly.
Responsibilities:
- Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process
- Develop privacy-first training algorithms and techniques
- Develop evaluation and auditing techniques to measure the privacy of training algorithms
- Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy
- Advocate on behalf of our users to ensure responsible handling of all data
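To give a flavor of the evaluation work above (a hedged sketch, not Anthropic's audit methodology), one of the simplest privacy checks is a loss-threshold membership-inference test: if a model's losses on training examples are indistinguishable from its losses on held-out examples, the model leaks little about which data it was trained on. The function below is hypothetical and illustrative only:

```python
def membership_inference_auc(train_losses, heldout_losses):
    """AUC of a loss-threshold membership-inference attack.

    Compares every training-set loss against every held-out loss;
    an AUC near 0.5 means the two loss distributions are
    indistinguishable, i.e. the model reveals little about
    training-set membership. An AUC near 1.0 indicates leakage.
    """
    correct = 0.0
    total = 0
    for lt in train_losses:
        for lh in heldout_losses:
            total += 1
            if lt < lh:        # members typically have lower loss
                correct += 1.0
            elif lt == lh:     # ties count as half
                correct += 0.5
    return correct / total
```

For example, `membership_inference_auc([0.1, 0.2], [0.3, 0.4])` returns 1.0 (maximal leakage), while identical loss lists return 0.5 (no leakage signal).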
You may be a good fit if you have:
- Experience working on privacy-preserving machine learning
- A track record of shipping products and features inside a fast-moving environment
- Strong coding skills in Python and familiarity with ML frameworks like PyTorch or JAX
- Deep familiarity with large language models, how they work, and how they are trained
- Experience with privacy-preserving techniques (e.g., differential privacy, and how it differs from k-anonymity, l-diversity, and t-closeness)
- Experience supporting fast-paced startup engineering teams
- Demonstrated success in bringing clarity and ownership to ambiguous technical problems
- Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics
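As an illustration of the techniques referenced above (a minimal sketch, not Anthropic's implementation), the Laplace mechanism is the basic building block of differential privacy: it releases a statistic with noise scaled to the query's sensitivity divided by the privacy budget epsilon. The function name and parameters here are hypothetical:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    """Release true_value with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via inverse transform sampling.
    u = rng.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

rng = random.Random(0)
true_count = 42  # e.g., number of users matching some query
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means more noise and stronger privacy; unlike k-anonymity, the guarantee is a property of the release mechanism, not of the dataset.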
Strong candidates may also:
- Have published papers on the topic of privacy-preserving ML at top academic venues
- Have prior experience training large language models (e.g., collecting training datasets, pre-training models, post-training models via fine-tuning and RL, running evaluations on trained models)
- Have prior experience developing tooling to support privacy-preserving ML (e.g., differential privacy in TF-Privacy or Opacus)
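For illustration only, here is a plain-NumPy sketch of the DP-SGD step that libraries like Opacus and TF-Privacy implement: clip each example's gradient to a fixed norm, average, then add Gaussian noise. All names are hypothetical and this is not Anthropic's training code:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD step: per-example clipping, averaging, Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise standard deviation follows the usual DP-SGD calibration:
    # noise_multiplier * clip_norm, divided by the batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return weights - lr * (mean_grad + noise)
```

The clipping bounds each example's influence on the update, which is what lets the added noise translate into a formal (epsilon, delta) guarantee via standard accounting.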
The annual compensation range for this role is listed below.
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa.