Lead / Manager

Solutions Architect - DevOps

NVIDIA

Australia, Remote
Posted April 1, 2026

Job Description

NVIDIA is looking for a Senior Cloud Infrastructure and DevOps Solutions Architect to join its ANZ team, focused on working with our Neo Clouds and their customers on stand-up and operational excellence. Ideally the successful candidate would be located in either Sydney or Melbourne. Enterprise, government and academic research groups around the world use NVIDIA products to redefine deep learning and data analytics, and to power data workloads. Be involved with the crew developing many of the largest and fastest AI/HPC systems in the world! We are looking for someone able to work on a dynamic, customer-focused team, which requires excellent interpersonal skills. This role interacts with Neo Clouds, customers, partners and various internal departments to analyze, define, implement and operate large-scale AI operational projects. The scope of these efforts spans system building, Kubernetes-based platforms, automation, hardware and networking.

What you'll be doing:

  • Maintain large-scale computational and AI infrastructure, focusing on monitoring, logging, and workload orchestration (Kubernetes and Linux job schedulers).

  • Optimize scalable, production-ready Kubernetes-based container platforms coordinated with enterprise-grade networking and storage.

  • Serve as a key technical resource, develop, refine, and document standard methodologies and operational guidelines to be shared with internal teams.

  • Perform end-to-end troubleshooting across the stack, from bare metal and the operating system through the software stack, container platform, networking, and storage.

  • Support Enterprise, Research & Development activities and engage in POCs/POVs to validate new features, architectures, and upgrade approaches.

  • Deploy monitoring solutions for servers, network and storage, with a focus on optimising service performance and availability to meet requirements and SLAs.

  • Develop tooling to automate deployment and management of large-scale infrastructure environments, to automate operational monitoring and alerting, and to enable self-service consumption of resources.

  • Create and deliver high-quality documentation, including runbooks, onboarding materials, and best-practice guides for customers and internal teams.

  • Become the technical leader for assigned customer accounts, providing strategic guidance on DevOps and platform architecture and influencing long-term infrastructure and operations decisions.
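The operational monitoring and alerting work described above might, in miniature, look something like the following sketch: evaluating node metrics against SLA-style thresholds and surfacing violations. All metric names and threshold values here are illustrative assumptions, not NVIDIA tooling.

```python
# Hedged sketch: checking node metrics against hypothetical SLA thresholds,
# the kind of logic an operational alerting tool might encode.
# Metric names and bounds are illustrative assumptions only.

THRESHOLDS = {
    "gpu_utilization_pct": (0.0, 100.0),   # sanity bounds
    "disk_used_pct": (0.0, 85.0),          # alert above 85% full
    "xid_errors_per_hour": (0.0, 0.0),     # any GPU XID error warrants attention
}

def evaluate_node(metrics: dict) -> list:
    """Return (metric, value) pairs that violate their thresholds."""
    alerts = []
    for name, (low, high) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            alerts.append((name, "missing"))   # an absent metric is itself a fault
        elif not (low <= value <= high):
            alerts.append((name, value))
    return alerts

node = {"gpu_utilization_pct": 97.0, "disk_used_pct": 91.5, "xid_errors_per_hour": 0}
print(evaluate_node(node))  # flags the disk_used_pct violation
```

In practice this role would implement such checks with an observability stack (e.g., Prometheus alerting rules) rather than ad-hoc scripts; the sketch only shows the threshold-evaluation idea.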

What we need to see:

  • BS/MS/PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or related fields, with 5+ years of professional experience in managing scalable cloud environments and automation engineering roles.

  • Kubernetes & AI/ML Workloads: Extensive experience with Kubernetes for container orchestration, resource scheduling, scaling, and integration with HPC environments.

  • Cloud & HPC Expertise: Proven understanding of networking fundamentals (TCP/IP stack), data center architectures, and hands-on experience managing HPC/AI clusters, including deployment, optimization, and troubleshooting.

  • Hardware & Software Knowledge: Familiarity with HPC and AI technologies (CPUs, GPUs, high-speed interconnects) and supporting software stacks.

  • Linux & Storage Systems: Deep knowledge of Linux (RedHat/CentOS, Ubuntu), OS-level security, and protocols (TCP, DHCP, DNS). Experience with storage solutions such as Lustre, GPFS, ZFS, XFS, and emerging Kubernetes storage technologies.

  • Automation & Observability: Proficiency in Python and Bash scripting, configuration management, and Infrastructure-as-Code tools (e.g., Ansible, Terraform). Experience with observability stacks (Grafana, Loki, Prometheus) for monitoring, logging, and building fault-tolerant systems.

  • Solution Architecture & Customer Engagement: Strong background in crafting scalable solutions and providing consultative support to customers.
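The Infrastructure-as-Code tools named above (Ansible, Terraform) share one core model: declare a desired state, diff it against actual state, and apply only the changes needed so that repeated runs are idempotent. A minimal sketch of that model, with purely hypothetical resource names:

```python
# Hedged sketch of the declarative, idempotent model behind IaC tools such as
# Ansible and Terraform: plan = diff(desired, actual). Resource names are
# illustrative assumptions, not real infrastructure.

def plan(desired: dict, actual: dict) -> dict:
    """Return the actions needed to converge actual state onto desired state."""
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items() if k in actual and actual[k] != v}
    delete = [k for k in actual if k not in desired]
    return {"create": create, "update": update, "delete": delete}

desired = {"nfs-server": {"cpus": 8}, "login-node": {"cpus": 16}}
actual = {"nfs-server": {"cpus": 4}, "old-node": {"cpus": 2}}
print(plan(desired, actual))
```

Running the plan against a state that already matches the desired state yields an empty plan, which is the idempotence property these tools rely on.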

Ways to stand out from the crowd:

  • Knowledge of CI/CD pipelines for software deployment and automation.

  • Solid hands-on knowledge of Kubernetes and container-based microservices architectures.

  • Experience with GPU-focused hardware and software (e.g., NVIDIA DGX, CUDA, GPU Operator).

  • Background with RDMA-based fabrics (InfiniBand or RoCE) in HPC or AI environments.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.