Do you want to help make the world safe from cyber attack?
At Corelight, we believe that the best approach to cybersecurity risk starts with the network. Attackers can evade endpoint detection, firewalls, and many other technologies, but they can't avoid leaving digital footprints on the networks they traverse. Built on open-source innovations from Zeek, Suricata and YARA and refined through years of real-world use, Corelight transforms network footprints from physical, virtual and cloud networks into actionable insights. Our customers use these insights to speed incident response and proactively hunt for threats.
As a Lead Cloud Infrastructure Engineer / Site Reliability Engineer (SRE), you will ensure the stability, performance, and security of our Federal region’s cloud platform. You’ll manage infrastructure and operations with a focus on availability, latency, performance optimization, monitoring, incident response, and capacity planning. This role requires maintaining a FedRAMP-compliant environment and working closely with teams to meet the highest standards of security and compliance.
We adopt an "everything as code" approach, leveraging automation and best practices to create an efficient, reliable, and scalable infrastructure. You will be instrumental in maintaining core infrastructure services that are robust, secure, and capable of processing high volumes of data seamlessly.
The successful candidate must be a U.S. citizen and may need to perform work that the U.S. government has specified can only be carried out by a U.S. citizen on U.S. soil.
Responsibilities
- Collaborate with software engineering teams to ensure the reliability, performance, and security of the Federal region’s infrastructure.
- Design, deploy, and scale AI/ML/LLM infrastructure across cloud platforms (AWS, Azure, or GCP) ensuring high reliability and performance.
- Manage and optimize Kubernetes environments (EKS, AKS, GKE) for AI services, data pipelines, and model operations.
- Build and automate end-to-end data and model pipelines for fine-tuning, inference, and RAG workloads using Terraform, Python, and CI/CD tooling.
- Utilize automation tools such as GitOps, CI/CD pipelines, and containerization technologies (Docker, Kubernetes) to streamline tasks across the large language model (LLM) lifecycle.
- Implement monitoring, observability, and reliability best practices using Prometheus, Grafana, ELK/EFK, Langfuse, and SLI/SLO/SLA frameworks.
- Participate in 24x7 on-call rotations, leading incident response, performance tuning, and cost optimization across the SaaS platform and production workloads.
- Own infrastructure end to end, leading scaling initiatives, deployments, and automation, and providing technical leadership across the team.
Qualifications/Requirements:
- Python, Go, Rust
- AWS, GCP, Azure
- Kubernetes, Docker
- Machine learning, AI