AI Engineer - Gen AI/SWE - Weights & Biases
CoreWeave
Compensation
$188,000 - $275,000/year
Job Description
What You'll Do:
The AI team is a hands-on applied AI group at Weights & Biases that turns frontier research into teachable workflows. We collaborate with leading enterprises and the OSS community, and we are the team that grew W&B from a few hundred users to millions, making it one of the most beloved tools in the ML community. This is a senior applied role at the research-to-product boundary: you will design, implement, and evaluate LLM applications and agents using cutting-edge techniques from the latest research, then document and teach them to our community and customers. The focus is application, not novel research: rapid prototyping, careful evaluation, and production-grade reference implementations with clear trade-offs. We prioritize responsible, safe deployment and reproducibility.
About the role:
- Ship end-to-end GenAI workflows (prompting → RAG → tools/agents → eval → serve) with reproducible repos, W&B Reports, and dashboards others can run.
- Build agentic systems (tool use, function calling, multi-step planners) with MCP servers/clients and secure tool/resource integrations.
- Design evaluation harnesses (RAG/agent evals, golden sets, regression tests, telemetry) and drive continuous improvement via offline + online metrics.
- Build in public: Publish engineering artifacts (code, docs, talks, tutorials) and engage with OSS and customer engineers; turn repeated patterns into reusable templates.
- Partner with product/solutions to launch LLM-powered features with clear latency/cost/SLO targets and safety/guardrail checks.
- Run growth experiments that measure how the artifacts you build drive usage of the Weights & Biases suite of products.
Who You Are:
- Software engineering: 6+ years building production systems; strong Python or TypeScript + system design, testing, CI/CD, observability.
- GenAI apps: shipped LLM-powered features (tools/agents/function calling), with measurable impact (latency/cost/reliability).
- Agentic patterns: implemented planners/executors, tool orchestration, sandboxing, and failure taxonomies; familiarity with agent infra concerns.
- RAG: pragmatic mastery of chunking, embeddings, vector/hybrid search, rerankers; experience with vector DBs/search indices and retrieval policy design.
- Evaluation: designed LLM/RAG/agent evals (offline golden sets, counterfactuals, user studies, guardrail tests); stats literacy (variance, CIs, power).
- Serving & productization: comfortable with queueing, caching, streaming, and cost controls; can debug latency at model, retrieval, and network layers.
- Public signal: 2+ substantial OSS repos/blog posts/talks/videos with adoption (stars, forks, downloads, views) and reproducible artifacts.
Preferred:
- Experience building with AI SDKs / agent frameworks (e.g., TypeScript/Python SDKs, planning libraries) and shipping developer-facing examples.
- Production agent security/sandboxing, red-teaming, and policy/PII enforcement.
- Operated eval platforms or built judge models/heuristics; experience leading metrics reviews with product/UX.
- Customer-facing enablement: templates or reference implementations adopted by external teams at scale.