
Principal Engineer, Compute Platform


Pinterest

Compensation

$242,634 - $499,541/year

San Francisco, CA, US; Remote, US
Remote
Posted March 27, 2026

Job Description

About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we’re on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other’s unique experiences and embrace the flexibility to do your best work. Creating a career you love? It’s Possible.

At Pinterest, AI isn't just a feature; it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.

Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know, but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.

Pinterest serves over 600 million users through sophisticated visual and social capabilities that connect inspiration, advertisement, and shopping. Compute Platform provides the underlying compute capabilities to run jobs and processes for all of the systems and workloads needed behind the scenes to create the best experience for our users and advertisers. This includes distributed processing, data systems, search, experimentation, monetization, AI/ML for ranking and recommendations, GenAI, and internal systems.

We are looking for a Principal Engineer who can lead and scale the consolidation and modernization of this infrastructure under what we call PinCompute, with an emphasis on some of the largest and most challenging stateful workloads, as well as GPU-heavy AI workloads. The scale and scope of the effort will require designing and building around Kubernetes and solving its scaling limitations; handling stateful systems and data-intensive workloads; formalizing mechanisms to stack and bin pack workloads; working with multiple internal customers and giving them migration paths; and working through the ambiguous, unforeseen situations that arise from workload requirements, production and operability requirements, and unique multi-tenancy challenges.


What you'll do:

  • Solving the challenges of replacing isolated pools of dedicated compute resources with a very large scale shared compute platform, shifting from machine-based designs to container-based designs.
  • Working with leads across various platforms, especially stateful and data platforms, to build the right features and migration paths that work for them.
  • Owning and driving up utilization on the shared compute platform by designing and implementing workload stacking, bin packing, safe oversubscription, and related optimizations.
  • Working with multiple customers with unique requirements to make sure the platform addresses their needs and is not just a viable but a desirable solution for running their workloads.
  • Leading a group of engineers on design topics, execution, trade-offs, migration paths, observability, performance, and operability for the platform.
  • Evolving the platform towards a multi-cloud abstraction layer to enable running workloads across multiple cloud providers.
  • Being a role model for setting a high bar for production quality and engineering excellence in delivering a foundational technology which empowers the entire company.
  • Working closely with partners around capacity planning, cost visibility, fungibility of virtual machine instance types, and efficiency.
  • Putting special focus on the delivery of GPU resources through the platform, to enable and expedite AI workloads.
  • Leveraging AI tools to increase the velocity and ease of migrations, and creating self-service solutions for the customers of the platform as needed.
  • Helping the team apply AI to the operational aspects of running the cluster: discovering, investigating, and root-causing issues.
  • Expediting feature development using AI coding tools, and serving as a thought leader on striking the right balance between speed and safety by designing safeguards.
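To make the bin-packing responsibility above concrete, here is a minimal first-fit-decreasing sketch of placing workloads onto shared nodes by CPU and memory. All workload names and node capacities are hypothetical, and production schedulers (such as the Kubernetes scheduler this role would build around) weigh many more dimensions — topology, GPU affinity, oversubscription risk, and tenant isolation:

```python
# Illustrative first-fit-decreasing bin packing of workloads onto nodes.
# Hypothetical names and capacities; not Pinterest's actual scheduler.
from dataclasses import dataclass, field

@dataclass
class Node:
    cpu: float                 # remaining CPU cores
    mem: float                 # remaining memory (GiB)
    workloads: list = field(default_factory=list)

def pack(workloads, node_cpu=64.0, node_mem=256.0):
    """Place (name, cpu, mem) workloads onto as few nodes as possible."""
    nodes = []
    # Sort largest-first so big workloads anchor bins (first-fit decreasing).
    for name, cpu, mem in sorted(workloads, key=lambda w: (w[1], w[2]), reverse=True):
        for node in nodes:
            if node.cpu >= cpu and node.mem >= mem:
                node.cpu -= cpu
                node.mem -= mem
                node.workloads.append(name)
                break
        else:
            # No existing node fits; provision a fresh one.
            nodes.append(Node(node_cpu - cpu, node_mem - mem, [name]))
    return nodes

nodes = pack([("search", 32, 128), ("ranking", 24, 96), ("genai", 16, 64),
              ("logs", 8, 32), ("cron", 4, 16)])
```

Stacking these five workloads onto two 64-core nodes instead of five dedicated pools is exactly the utilization win the role targets; the hard parts the posting alludes to are doing this safely for stateful workloads and at cluster scale.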