Mid-Level

Reinforcement Learning Engineer

Weights & Biases

San Francisco, CA / Bellevue, WA / Remote (US)
Hybrid
Posted March 25, 2026

Job Description

CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe, and was ranked as one of the TIME100 most influential companies of 2024. By bringing together CoreWeave’s industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we’re setting a new standard for how AI is built, trained, and scaled.

The integration of our teams and technologies is accelerating our shared mission: to empower developers with the tools and infrastructure they need to push the boundaries of what AI can do. From experiment tracking and model optimization to high-performance training clusters, agent building, and inference at scale, we’re combining forces to serve the full AI lifecycle — all in one seamless platform.

Weights & Biases has long been trusted by over 1,500 organizations — including AstraZeneca, Canva, Cohere, OpenAI, Meta, Snowflake, Square, Toyota, and Wayve — to build better models, AI agents, and applications. Now, as part of CoreWeave, that impact is amplified across a broader ecosystem of AI innovators, researchers, and enterprises.

As we unite under one vision, we’re looking for bold thinkers and agile builders who are excited to shape the future of AI alongside us. If you're passionate about solving complex problems at the intersection of software, hardware, and AI, there's never been a more exciting time to join our team.

Our Team

The OpenPipe team at CoreWeave is building tools to help agents learn from experience. This is a critical step toward making agents reliable enough to perform long tasks autonomously, the way human employees do. We’re systematically identifying and solving the major bottlenecks between today’s tech and those future self-improving agents. So far, we’ve:

  • Released ART, the easiest library for getting started with RL.
  • Developed RULER, a general-purpose reward function that works across many diverse tasks.
  • Built Serverless RL, an elegant API that gives RL practitioners full control over their data, environment and reward function while letting them outsource the headaches of managing GPU infrastructure.
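To make the RULER idea concrete: instead of hand-writing a task-specific reward, a judge ranks a group of trajectories against each other and rewards are derived from that ranking. The sketch below is purely illustrative — the function names and the toy judge are assumptions, not the actual ART or RULER API.

```python
# Hypothetical sketch of a RULER-style general-purpose reward:
# score a group of agent trajectories relative to one another
# rather than with a hand-written, task-specific reward function.
# All names here are illustrative, not the real ART/RULER interface.

def rank_trajectories(trajectories, judge):
    """Return per-trajectory rewards in [0, 1] derived from a judge's ranking."""
    # The judge returns trajectory indices ordered best-to-worst.
    order = judge(trajectories)
    n = len(trajectories)
    rewards = [0.0] * n
    for rank, idx in enumerate(order):
        # Best trajectory gets 1.0, worst gets 0.0, evenly spaced between.
        rewards[idx] = (n - 1 - rank) / (n - 1) if n > 1 else 1.0
    return rewards

# Toy judge that prefers shorter trajectories — a stand-in for an
# LLM judge that would compare trajectories on task success.
def toy_judge(trajectories):
    return sorted(range(len(trajectories)), key=lambda i: len(trajectories[i]))

print(rank_trajectories([["a", "b", "c"], ["a"], ["a", "b"]], toy_judge))
# → [0.0, 1.0, 0.5]
```

The appeal of this relative-ranking scheme is that only the judge is task-aware; the reward computation itself transfers unchanged across diverse tasks.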

These releases have a theme: we’re systematically tackling each major roadblock to successfully training self-improving agents. Several serious challenges remain. Building simulated environments often requires substantial human labor, and existing training methods are not data efficient enough. We're laser-focused on solving these problems and making self-improvement a reality for agent developers.

In startup terms, this is a classic hard-tech bet. Our roadmap involves substantial techn

Tags: go, rust, aws, kubernetes, ai, ios, data, product