Mid-Level

Applied AI Engineer, Inference

Weights & Biases

Compensation

$165,000 - $242,000/year

San Francisco, CA / Sunnyvale, CA / Bellevue, WA / Remote, US
Hybrid
Posted March 25, 2026

Job Description

CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers across every region of the US and Europe, and was ranked as one of the TIME100 most influential companies of 2024. By bringing together CoreWeave’s industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we’re setting a new standard for how AI is built, trained, and scaled.

The integration of our teams and technologies is accelerating our shared mission: to empower developers with the tools and infrastructure they need to push the boundaries of what AI can do. From experiment tracking and model optimization to high-performance training clusters, agent building, and inference at scale, we’re combining forces to serve the full AI lifecycle — all in one seamless platform.

Weights & Biases has long been trusted by over 1,500 organizations — including AstraZeneca, Canva, Cohere, OpenAI, Meta, Snowflake, Square, Toyota, and Wayve — to build better models, AI agents, and applications. Now, as part of CoreWeave, that impact is amplified across a broader ecosystem of AI innovators, researchers, and enterprises.

As we unite under one vision, we’re looking for bold thinkers and agile builders who are excited to shape the future of AI alongside us. If you're passionate about solving complex problems at the intersection of software, hardware, and AI, there's never been a more exciting time to join our team.

What You'll Do

Description of the team:

The Inference team is responsible for delivering high-performance model serving capabilities that meet the needs of real production workloads. We work at the intersection of model behavior, serving systems, hardware, and customer requirements to improve throughput, latency, reliability, and quality across our inference stack.

About the role:

We are looking for an Applied AI Engineer to help us understand, measure, and improve the real-world performance of our inference platform. In the near term, this role will focus on building and running rigorous benchmarks, profiling model and system behavior, identifying bottlenecks, and driving targeted optimizations for both platform-wide and customer-specific workloads. This role is intentionally scoped around applied performance work in support of the Inference organization. Initial responsibilities center on benchmarking, optimization, and workload-driven research rather than broad ownership of frontier model research agendas. Over time, the scope of the role is expected to broaden as the team and product mature.

  • Build and maintain benchmarking workflows that measure latency, throughput, quality regressions, and cost across priority models and serving configurations.
  • Benchmark our inference stack against realistic customer workloads and external provider baselines to identify performance gaps and improvement opportunities.
  • Profile model and system behavior to identify bottlenecks and drive targeted optimizations for platform-wide and customer-specific workloads.