System Engineer, GPU Fleet
Fluidstack
Job Description
About Fluidstack
At Fluidstack, we build the compute, data centers, and power that will fuel artificial superintelligence. We supply gigawatts of compute capacity to the world’s biggest AI labs at industry-defining speeds.
Our team is small, fast, and obsessed with quality. We own outcomes end-to-end, challenge assumptions, and treat our customers' problems as our own. No task is beneath anyone here.
There are a few thousand people who will shape the trajectory of superintelligence. Come and be one of them.
About the Role
As a System Engineer, GPU Fleet, you will manage, operate, and optimize hyperscale GPU compute infrastructure supporting AI/ML training and inference workloads. You will ensure the high availability, performance, and reliability of the GPU server fleet through automation, monitoring, troubleshooting, and collaboration with hardware engineering, platform teams, and datacenter operations.
Focus
Operate and maintain a large-scale GPU server fleet (NVIDIA H100, B200, GB200) supporting AI/ML workloads; monitor system health, performance, and utilization to maximize uptime and ensure SLA compliance
Perform hands-on troubleshooting and root cause analysis of complex hardware, firmware, OS, and application issues across GPU clusters; coordinate with vendors and hardware teams to resolve systemic failures
Develop and maintain automation scripts for provisioning, configuration management, monitoring, and remediation at scale
Build and improve tooling for GPU health checks, performance diagnostics, driver validation, and automated recovery
Execute server provisioning, configuration, firmware updates, and OS installation using automation frameworks; manage lifecycle operations including deployment, maintenance, and decommissioning
Participate in 24x7 on-call rotation; respond to production incidents and coordinate resolution with cross-functional teams including datacenter operations, network engineering, and application teams
Lead post-incident reviews, document root causes, and drive continuous improvement initiatives focused on automation, reliability, monitoring, and operational efficiency
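The GPU health checks and automated recovery described above could be sketched as follows. This is a minimal, illustrative example, not Fluidstack's actual tooling: it assumes each host exposes per-GPU state via `nvidia-smi --query-gpu=... --format=csv,noheader,nounits` (a real nvidia-smi interface), and the thresholds and field choices are placeholder policy.

```python
# Hypothetical GPU fleet health check built on nvidia-smi CSV output.
# Thresholds below are illustrative assumptions, not vendor guidance.
import subprocess

QUERY_FIELDS = "index,temperature.gpu,ecc.errors.uncorrected.volatile.total,utilization.gpu"

def parse_nvidia_smi_csv(csv_text, temp_limit_c=85, ecc_limit=0):
    """Return (gpu_index, reason) pairs for GPUs that fail the health check."""
    unhealthy = []
    for line in csv_text.strip().splitlines():
        index, temp, ecc, util = [field.strip() for field in line.split(",")]
        if int(temp) > temp_limit_c:
            unhealthy.append((int(index), f"temperature {temp}C exceeds {temp_limit_c}C"))
        if ecc.isdigit() and int(ecc) > ecc_limit:
            unhealthy.append((int(index), f"{ecc} uncorrected ECC errors"))
    return unhealthy

def check_local_gpus():
    """Query nvidia-smi on this host (requires an NVIDIA driver) and report failures."""
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY_FIELDS}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_nvidia_smi_csv(out)

if __name__ == "__main__":
    # Sample output for two GPUs: GPU 1 is running hot and has ECC errors.
    sample = "0, 62, 0, 97\n1, 91, 3, 12"
    for idx, reason in parse_nvidia_smi_csv(sample):
        print(f"GPU {idx}: {reason}")
```

In a real fleet, a check like this would typically feed a remediation pipeline (drain the node in the scheduler, open a hardware ticket) rather than just print results.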
Basic Qualifications
Bachelor's degree in Computer Science, Engineering, or related technical field (or equivalent practical experience)
3+ years (System Engineer) or 5+ years (Senior System Engineer) in Linux system administration, datacenter operations, or infrastructure engineering
Strong Linux/Unix fundamentals, including system administration, scripting (Bash, Python), troubleshooting, and performance tuning
Experience with server hardware architecture, troubleshooting techniques, and understanding of compute, memory, storage, and networking components
Experience with automation and configuration management tools (Ansible, Puppet, Chef, Terraform)
Strong analytical and problem-solving skills with ability to diagnose complex technical issues under pressure
Excellent communication and collaboration skills; ability to work effectively with cross-functional teams
Preferred Qualifications
Experience managing large-scale GPU infrastructure (NVIDIA H100, A100, B200, GB200) in production environments supporting AI/ML workloads
Deep knowledge of GPU architecture, CUDA toolkit, GPU drivers, monitoring tools (nvidia-smi, DCGM)
Experience with HPC cluster management, job schedulers (Slurm, PBS, LSF), and container orchestration (Kubernetes, Docker)
Proficiency in out-of-band management protocols (IPMI, Redfish, BMC) and firmware management for server hardware
Experience with high-performance networking (InfiniBand, RoCE, RDMA) and network troubleshooting in GPU cluster environments
Familiarity with datacenter operations including rack installations, cabling, power management, and thermal considerations
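As a flavor of the out-of-band management work mentioned above, here is a minimal sketch of reading a server's power and health state over Redfish. It assumes a standard DMTF Redfish BMC; the endpoint path, hostname, and token handling are illustrative placeholders, not a specific vendor's API.

```python
# Hypothetical Redfish health probe. The /redfish/v1/Systems/1 path and
# X-Auth-Token header follow the DMTF Redfish convention; real BMCs may
# differ in system IDs and authentication setup.
import json
import urllib.request

def parse_system_health(payload):
    """Extract power state and health from a Redfish ComputerSystem JSON payload."""
    data = json.loads(payload)
    return data.get("PowerState"), data.get("Status", {}).get("Health")

def fetch_system_health(bmc_host, session_token):
    """Fetch one ComputerSystem resource from a BMC (requires network access to it)."""
    req = urllib.request.Request(
        f"https://{bmc_host}/redfish/v1/Systems/1",
        headers={"X-Auth-Token": session_token},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_system_health(resp.read())

if __name__ == "__main__":
    # Sample payload shaped like a DMTF Redfish ComputerSystem resource.
    sample = '{"PowerState": "On", "Status": {"State": "Enabled", "Health": "OK"}}'
    print(parse_system_health(sample))
```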
Salary & Benefits
Competitive total compensation package (salary + equity).
Retirement or pension plan, in line with local norms.
Health, dental, and vision insurance.
Generous PTO policy, in line with local norms.
The base salary range for this position is $200,000 - $300,000 per year, depending on experience, skills, qualifications, and location. This range represents our good faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options.
We are committed to pay equity and transparency.
Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
You will receive a confirmation email once your application has been successfully accepted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application; someone from our recruiting team will be in touch.