Senior QA Analyst, AI & Machine Learning Systems

Keeper Security

Remote, US
Hybrid
Posted January 27, 2026

Job Description

Keeper is hiring a talented Senior QA Analyst to join our AI & Threat Analytics team. This is a 100% remote position from select locations, with an opportunity to work a hybrid schedule for candidates based in the El Dorado Hills, CA or Chicago, IL metro areas.

Keeper’s cybersecurity software is trusted by millions of people and thousands of organizations globally. Keeper is published in 23 languages and sold in over 150 countries. Join one of the fastest-growing cybersecurity companies and help ensure the quality, safety, and reliability of Keeper’s next-generation AI features and threat analytics capabilities.

About Keeper

Keeper Security is transforming cybersecurity for organizations globally with zero-trust privileged access management built with end-to-end encryption. Keeper’s cybersecurity solutions are FedRAMP and StateRAMP Authorized, SOC 2 compliant, FIPS 140-2 validated, as well as ISO 27001, 27017 and 27018 certified. Keeper deploys in minutes, not months, and seamlessly integrates with any tech stack to prevent breaches, reduce help desk costs and ensure compliance. Trusted by millions of individuals and thousands of organizations, Keeper is the leader for password, passkey and secrets management, privileged access, secure remote access and encrypted messaging. Learn how our zero-trust and zero-knowledge solutions defend against cyber threats at KeeperSecurity.com.

About the Role

The Senior QA Analyst, AI & Threat Analytics will provide technical leadership for testing AI- and machine-learning-driven features across Keeper’s platform. This role focuses on validating the behavior, reliability, and quality of AI systems, including LLM-based summarization, classification, autofill, and behavior analysis pipelines. You will design and execute test strategies for nondeterministic systems, develop Python-based automation and evaluation tools, and validate model outputs, guardrails, and data flows. This role does not involve training models; instead, you will work closely with ML engineers, security teams, and product engineering to ensure AI features meet Keeper’s quality, safety, and performance standards in production environments.

Responsibilities

  • Design and execute test plans for AI- and machine-learning-driven features including classification, summarization, anomaly detection, and autofill systems
  • Develop Python-based automation and evaluation scripts to validate model performance, reliability, and output quality
  • Test LLM-based features for accuracy, guardrail effectiveness, hallucination risk, and behavior under edge cases
  • Validate data pipelines, input/output transformations, and model integration points across Keeper’s platform
  • Conduct regression testing on ML models and AI workflows to ensure reliability across releases
  • Test enterprise-scale behavior analysis features driven by data from ARAM, PAM, and session recordings
  • Validate security, privacy, and zero-trust requirements for AI model interactions and data handling
  • Collaborate with ML engineers and cross-functional teams to identify defects and improve model behavior
  • Analyze logs, metrics, and model outputs to detect anomalies, performance issues, or silent failures
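To illustrate the kind of work described above, here is a minimal sketch of how a QA analyst might test a nondeterministic LLM feature: rather than asserting exact output strings, the test checks structural and bounded properties of the response. The `summarize` function and its response shape are hypothetical stand-ins, not Keeper's actual API.

```python
def summarize(text):
    # Hypothetical stand-in for an LLM-backed summarization endpoint.
    # A real test would call the service under test instead.
    return {"summary": text[:50], "confidence": 0.9}

def check_summary(payload, source_text):
    """Property-style checks for a nondeterministic output:
    validate structure and bounds, not exact strings."""
    assert isinstance(payload, dict)
    assert "summary" in payload and "confidence" in payload
    # Confidence must be a valid probability
    assert 0.0 <= payload["confidence"] <= 1.0
    # Summary should be non-empty and no longer than the input
    assert 0 < len(payload["summary"]) <= len(source_text)
    return True

text = "An anomalous login from a new device triggered a session review."
assert check_summary(summarize(text), text)
```

Property-based checks like these are one common way to get repeatable pass/fail signals from systems whose outputs vary run to run.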

Requirements

  • 3+ years of experience in QA, test automation, or evaluation of complex software systems
  • Hands-on experience writing Python automation or evaluation scripts for testing APIs, model outputs, or data pipelines
  • Understanding of machine learning concepts such as classification, summarization, embeddings, anomaly detection, or behavior analysis
  • Experience testing LLM-integrated features or AI-driven workflows
  • Familiarity with ML-related tooling such as pytest, Jupyter notebooks, Hugging Face libraries, or similar
  • Experience with API testing, JSON validation, and test data generation
  • Ability to test nondeterministic systems and evaluate model quality using structured methodologies
  • Strong analytical, troubleshooting, and communication skills
  • Experience testing cloud-integrated systems, preferably on AWS
  • Bachelor’s degree in Computer Science, Information Systems, or equivalent practical experience
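As a concrete example of the API testing and JSON validation skills listed above, the following sketch parses a model-output payload and checks required fields and types using only the standard library. The field names (`label`, `score`) are illustrative assumptions, not a real Keeper schema.

```python
import json

def validate_response(raw):
    """Minimal JSON validation for a hypothetical model-output API:
    parse the payload, then verify required fields and their types."""
    data = json.loads(raw)
    required = {"label": str, "score": float}
    for field, ftype in required.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise TypeError(f"{field} must be {ftype.__name__}")
    return data

raw = '{"label": "phishing", "score": 0.87}'
result = validate_response(raw)
```

In practice a schema-validation library could replace the hand-rolled checks, but the stdlib version keeps the example self-contained.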

Preferred Qualifications

  • Python, Go, Rust, AWS, machine learning, AI, data analytics, product design