
Staff SRE, Agentic AI


Netskope

Bengaluru, Karnataka, India
On-site
Posted March 20, 2026

Job Description

About Netskope

Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. 

Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope.

About the role: 

Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.

As an MLOps SRE, you will be critical to deploying and managing the cutting-edge infrastructure that underpins AI/ML operations, and you will collaborate with AI/ML engineers and researchers to develop a robust CI/CD pipeline that supports safe and reproducible experiments. Your expertise will also extend to setting up and maintaining monitoring, logging, and alerting systems to oversee extensive training runs and client-facing APIs. You will ensure that training environments are highly available and efficiently managed across multiple clusters, enhancing our containerization and orchestration systems with tools like Docker and Kubernetes.

What’s in it for you

You will be critical to deploying and managing cutting-edge infrastructure for AI/ML operations. This means you won't just maintain existing systems; you will be building the foundational technology that powers our next generation of intelligent products. Your role is crucial to bridging the gap between research and production. If you thrive on solving complex distributed systems challenges and maximizing the efficiency of high-stakes AI workloads, this is the environment for you.

What you will be doing

  • Work closely with AI/ML engineers and researchers to participate in the design and architecture of AI/ML applications for scale and reliability. Design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.
  • Participate in production troubleshooting of AI/ML application code as well as infrastructure configurations.
  • Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.
  • Ensure training environments are consistently available and prepared across multiple clusters.
  • Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.
  • Operate and oversee large Kubernetes clusters with GPU workloads.
  • Improve the reliability, quality, and time-to-market of our suite of software solutions.
  • Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
  • Provide primary operational
Tags: Python, Go, AWS, Azure, Kubernetes, Docker, AI, Data, Product, Design