Senior Software Engineer II, Inference

CoreWeave

Sunnyvale, CA / Bellevue, WA
Hybrid
Posted April 1, 2026

Job Description

CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.

What You’ll Do:

Senior engineers are area owners who lead designs, raise engineering standards, and deliver measurable improvements to latency, throughput, and reliability across multiple services. You’ll partner with product, orchestration, and hardware teams to evolve our Kubernetes-native inference platform and meet strict P99 SLAs at scale.

About the role:

  • Lead design reviews and drive architecture within the team; decompose multi-service work into clear milestones.
  • Define and own SLIs/SLOs; ensure post-incident actions land and reliability improves release-over-release.
  • Implement advanced optimizations (e.g., micro-batch schedulers, speculative decoding, KV-cache reuse) and quantify impact.
  • Strengthen incident posture: capacity planning, autoscaling policy, graceful degradation, rollback/traffic-shift strategies.
  • Mentor IC1/IC2 engineers; review cross-team designs and elevate coding/testing standards.
  • Own an area spanning multiple services and teams (e.g., request routing & adaptive scheduling, cost-per-token analytics, GPU resource isolation).

Who You Are:

  • Roughly 5-8 years of industry experience building distributed systems or cloud services.
  • Strong coding in Python or Go (C++ a plus) and deep familiarity with networked systems and performance.
  • Ability to optimize end-to-end ML system performance: developing and tuning CUDA kernels, reducing model latency, maximizing compute and memory bandwidth utilization, and leveraging custom accelerators for high-efficiency workloads.
  • Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry). 
  • Practical knowledge of inference internals: batching, caching, mixed precision (BF16/FP8), streaming token delivery. 
  • Proven track record improving tail latency (P95/P99) and service reliability through metrics-driven work. 

Preferred:

  • Contributions to inference frameworks (vLLM, Triton, TensorRT-LLM, Ray Serve, TorchServe).
  • Experience with CUDA kernels, NCCL/SHARP, RDMA/NUMA, or GPU interconnect topologies.
  • Leading multi-team initiatives or partnering with customers on mission-critical launches.

Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren't a 100% skill or experience match.

Why CoreWeave?

At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper-growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:

  • Be Curious at Your Core