Engineering Manager - ML Platform and Infrastructure
Applied Intuition
Compensation
$204,000 - $343,000/year
Job Description
About Applied Intuition
We are an in-office company, and our expectation is that employees primarily work from their Applied Intuition office 5 days a week. However, we also recognize the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving earlier when needed to accommodate family commitments.
About the role
As an Engineering Manager on the ML Platform team, you'll lead a world-class group of engineers focused on building the infrastructure that powers Physical AI at scale. Your team will own three critical areas:
- Training & Inference Orchestration: building frameworks to efficiently schedule and run massive jobs across thousands of GPUs
- GPU Cluster Architecture: designing and scaling what will be the largest GPU cluster for Physical AI in the industry
- Performance Optimization: pushing the limits of hardware utilization, throughput, and cost efficiency for large-scale training and inference workloads

You'll work at the intersection of systems engineering and ML, partnering directly with stack development and research teams to remove bottlenecks and accelerate the path from experimentation to production.
At Applied Intuition, you will:
- Grow and manage a team of world-class infrastructure and systems engineers with the goal of delivering a best-in-class ML platform for Physical AI
- Own the design and evolution of frameworks for orchestrating distributed training and inference jobs across thousands of GPUs
- Drive the buildout and scaling of our GPU cluster infrastructure, making critical decisions on architecture, scheduling, networking, and resource management
- Lead efforts to optimize training and inference performance, including throughput, fault tolerance, GPU utilization, and cost efficiency at scale
- Set team goals and roadmap in alignment with research milestones, model development timelines, and production deployment requirements
- Partner closely with research, stack development, and infrastructure teams to understand their workflows and accelerate their iteration speed
- Drive hiring, mentoring, and growth for a high-performing, mission-driven team
We’re looking for someone who has:
- 3+ years of engineering management experience, ideally leading infrastructure or platform teams
- Passion for building and leading high-performing teams that operate at the frontier of scale
- Deep experience with distributed systems, GPU computing, or large-scale ML infrastructure
- Direct experience building or operating large GPU clusters (1,000+ GPUs)
- Strong understanding of distributed training frameworks (e.g., PyTorch Distributed, Megatron-LM, DeepSpeed, FSDP) and job orchestration at scale
- Familiarity with GPU cluster management, high-performance networking (InfiniBand, RDMA), and resource scheduling (Slurm, Kubernetes)
- Track record of building and operating systems that run reliably at massive scale
Nice to have:
- Background in training optimization techniques such as mixed-precision training, pipeline/tensor/data parallelism, or checkpointing strategies
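For a sense of the day-to-day tooling behind the orchestration and scheduling requirements above, here is a minimal, hypothetical sketch of launching a multi-node PyTorch training job via Slurm and torchrun. All names, node counts, and file paths (`pretrain-sketch`, `train.py`, `cluster.yaml`) are placeholders, not details from this posting:

```shell
#!/bin/bash
#SBATCH --job-name=pretrain-sketch   # hypothetical job name
#SBATCH --nodes=16                   # 16 nodes x 8 GPUs = 128 GPUs
#SBATCH --gpus-per-node=8
#SBATCH --ntasks-per-node=1          # one launcher per node; torchrun spawns the workers
#SBATCH --time=48:00:00

# Rendezvous endpoint: the first node in the allocation.
head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# torchrun handles per-node process spawning and rendezvous across nodes.
srun torchrun \
  --nnodes="$SLURM_NNODES" \
  --nproc-per-node=8 \
  --rdzv-backend=c10d \
  --rdzv-endpoint="${head_node}:29500" \
  train.py --config cluster.yaml     # train.py / cluster.yaml are placeholders
```

At the scale this role describes, the interesting work is less the launch script itself and more what sits around it: topology-aware scheduling, elastic restarts on node failure, and checkpoint strategies that keep thousand-GPU jobs from losing hours of progress.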