Sr. ML Performance Engineer, AWS Neuron, Annapurna Labs
Amazon Development Centre Canada ULC
Toronto, ON, CAN
On-site
Posted October 24, 2025
Job Description
The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon’s custom machine learning accelerators, Inferentia and Trainium.
The Product: The AWS machine learning accelerators, Inferentia and Trainium, offer unparalleled ML inference and training performance. They are enabled by a state-of-the-art software stack, the AWS Neuron Software Development Kit (SDK). The SDK comprises an ML compiler, runtime, and application framework that integrate seamlessly into popular ML frameworks such as PyTorch. AWS Neuron, running on Inferentia and Trainium, is trusted and used by leading customers such as Snap, Autodesk, and Amazon Alexa.
The Team: Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS. Our organization spans multiple disciplines, including silicon engineering, hardware design and verification, software, and operations. Over the last few years we have delivered products such as AWS Nitro, ENA, EFA, Graviton, F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage.
Within this ecosystem, the Neuron Compiler team is developing a deep learning compiler stack that takes state-of-the-art LLM, vision, and multi-modal models created in frameworks such as TensorFlow, PyTorch, and JAX and makes them run performantly on our accelerators. The team comprises some of the brightest minds in the engineering, research, and product communities, focused on the ambitious goal of creating a toolchain that delivers a quantum leap in performance.
The Neuron team is hiring systems and compiler engineers to solve our customers' toughest problems. Specifically, the performance team in Toronto focuses on analyzing and optimizing the system-level performance of machine learning models on AWS ML accelerators. The team conducts in-depth profiling and works across multiple layers of the technology stack, from frameworks and compilers to runtime and collectives, to meet and exceed customer requirements while maintaining a competitive edge in the market. As part of the Neuron Compiler organization, the team not only identifies and implements performance optimizations but also crystallizes these improvements into the compiler, automating optimizations for broader customer benefit.
This is an opportunity to work on cutting-edge products at the intersection of machine learning, high-performance computing, and distributed architectures. You will architect and implement business-critical features, publish cutting-edge research, and mentor a brilliant team of experienced engineers. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. It is a unique learning culture. The team works closely with customers on model enablement, providing direct support and optimization expertise to ensure their machine learning workloads achieve optimal performance on AWS ML accelerators.
Explore the product and our history!
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://aws.amazon.com/machine-learning/neuron/
https://github.com/aws/aws-neuron-sdk
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success
Key job responsibilities
Our performance engineers collaborate across compiler, runtime, and framework teams to optimize machine learning workloads for our global customer base. You'll work at the intersection of machine learning, high-performance computing, and distributed systems, bringing a passion for performance analysis and optimization. In this role, you will:
* Analyze and optimize system-level performance of machine learning models across the entire technology stack, from frameworks to runtime
* Conduct detailed performance analysis and profiling of ML workloads, identifying and resolving bottlenecks in large-scale ML systems
* Work directly with customers to enable and optimize their ML models on AWS accelerators, understanding their specific requirements and use cases
* Design and implement compiler optimizations, transforming manual performance improvements into automated compiler passes
* Collaborate across teams to develop innovative optimization techniques that enhance AWS Neuron SDK's performance capabilities
* Work in a startup-like development environment, where you're always working on the most important stuff
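The bottleneck analysis mentioned above often starts with a roofline-style question: is a given kernel limited by compute throughput or by memory bandwidth? The sketch below is purely illustrative; the peak-FLOPS and bandwidth numbers are hypothetical placeholders, not Inferentia or Trainium specifications.

```python
# Toy roofline-style classifier: given a kernel's FLOP count and bytes of
# memory traffic, decide whether it is compute-bound or memory-bound on a
# hypothetical accelerator. Peak numbers are illustrative only.

PEAK_FLOPS = 100e12  # 100 TFLOP/s, hypothetical peak compute throughput
PEAK_BW = 1e12       # 1 TB/s, hypothetical peak memory bandwidth


def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved


def bound(flops: float, bytes_moved: float) -> str:
    """Classify a kernel under the roofline model."""
    # Machine balance: the arithmetic intensity where the roofline bends.
    machine_balance = PEAK_FLOPS / PEAK_BW
    ai = arithmetic_intensity(flops, bytes_moved)
    return "compute-bound" if ai >= machine_balance else "memory-bound"


# Example: a large matmul C[M,N] = A[M,K] @ B[K,N] in fp16 (2 bytes/element).
M = N = K = 4096
flops = 2 * M * N * K                       # multiply-add per output element
bytes_moved = 2 * (M * K + K * N + M * N)   # read A and B, write C once

print(bound(flops, bytes_moved))  # large matmuls tend to be compute-bound
```

In practice this first-order check guides where to look next: memory-bound kernels call for fusion and data-layout work, while compute-bound kernels call for better utilization of the matrix engines.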
A day in the life
As you design and code solutions that help our team drive efficiencies in software architecture, you'll create metrics, implement automation, and deliver other improvements.