Senior AI Engineer, GenAI & ML Evaluation Frameworks - Grafana Ops, AI/ML | USA | Remote
Grafana Labs
Job Description
Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies -- including Bloomberg, JPMorgan Chase, and eBay -- manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with Grafana Cloud or self-managed with the Grafana Enterprise Stack, both featuring scalable metrics (Grafana Mimir), logs (Grafana Loki), and traces (Grafana Tempo).
We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do.
You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.
This is a remote opportunity and we would be interested in applicants from USA time zones only at this time.
Senior Engineer – GenAI & ML Evaluation Frameworks
The Opportunity:
At Grafana, we build observability tools that help users understand, respond to, and improve their systems – regardless of scale, complexity, or tech stack. The Grafana AI teams play a key role in this mission by helping users make sense of complex observability data through AI-driven features. These capabilities reduce toil, lower the domain-expertise barrier, and surface meaningful signals from noisy environments.
We are looking for an experienced engineer with expertise in evaluating Generative AI systems, particularly Large Language Models (LLMs), to help us build and evolve our internal evaluation frameworks, and/or integrate existing best-of-breed tools. This role involves designing and scaling automated evaluation pipelines, integrating them into CI/CD workflows, and defining metrics that reflect both product goals and model behavior. As the team matures, there’s a broad opportunity to expand or redefine this role based on impact and initiative.
What You’ll Be Doing:
- Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification.
- Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
- Define and refine metrics for both structure and semantics, ensuring alignment with realistic use cases and operational constraints.
- Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.
What Makes You a Great Fit:
- Experience designing and implementing evaluation frameworks for AI/ML systems.
- Familiarity with prompt engineering, structured output evaluation, and context-window management in LLM systems.
- The autonomy to collaborate across teams and translate team goals into clear, testable criteria supported by effective tooling.
Bonus Points For:
- Experience working in environments with rapid iteration and experimental development.
- A pragmatic mindset that values reproducibility, developer experience, and thoughtful trade-offs when scaling GenAI systems.
- A passion for minimizing human toil and building AI systems that actively support engineers.
Compensation & Rewards:
In the United States, the base compensation range for this role is USD 154,445 - USD