Member of Technical Staff, AI Systems Engineer
Microsoft
Job Description
We are building next-generation custom AI silicon designed to accelerate AI workloads with unprecedented efficiency. We are looking for an exceptional Systems Engineer to bridge the gap between our custom hardware and modern AI inference frameworks.
We build foundational AI infrastructure that enables large-scale training and inference across diverse workloads and rapidly evolving hardware generations. Our work directly shapes how AI systems are designed, deployed, and scaled today and into the future. Engineers on this team operate with end-to-end ownership, deep technical rigor, and a strong bias toward real-world impact.
Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

This role is part of Microsoft AI's Superintelligence Team (MAIST), a startup-like team inside Microsoft AI created to push the boundaries of AI toward Humanist Superintelligence: ultra-capable systems that remain controllable, safety-aligned, and anchored to human values. Our mission is to create AI that amplifies human potential while ensuring humanity remains firmly in control, and to deliver breakthroughs that benefit society by advancing science, education, and global well-being. We are also fortunate to partner with incredible product teams, giving our models the chance to reach billions of users and create immense positive impact. If you're a brilliant, highly ambitious, and low-ego individual, you'll fit right in. Come join us as we work on our next generation of models!
The Role
As an AI Systems Engineer, you will own the software integration layer between our custom AI chip's proprietary SDK and SGLang, a state-of-the-art serving framework for Large Language Models (LLMs) and Vision-Language Models (VLMs). You will be responsible for ensuring that our silicon can seamlessly run SGLang inference workloads at peak performance, bypassing the traditional CUDA ecosystem entirely.
Responsibilities
- Framework Integration: Architect and develop the backend integration to make our custom AI chip a first-class citizen in SGLang.
- Custom Operator Development: Write custom C++ / PyTorch extensions that map SGLang’s primitive operations (e.g., RadixAttention, FlashAttention, matrix multiplications) to our custom chip's proprietary software layer.
- Performance Optimization: Profile and optimize end-to-end LLM inference latency, throughput, and memory utilization (e.g., PagedAttention-style KV-cache management) on our hardware.
- Cross-Functional Collaboration: Work closely with our hardware architecture and compiler teams to provide feedback on our custom software stack and silicon design based on framework-level bottlenecks.
- Testing & Deployment: Build robust testing pipelines to validate model accuracy and performance parity against standard GPU baselines.
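The operator-mapping work described above follows a common pattern in inference frameworks: the framework dispatches named primitives (attention, matmul) to whichever backend has registered an implementation, so a custom chip's kernels can replace CUDA ones without changing model code. Below is a minimal, hypothetical sketch of that registry pattern in plain Python; all names are illustrative and do not reflect SGLang's actual backend API.

```python
# Minimal sketch of a backend operator registry, illustrating how a serving
# framework might route primitives (matmul, attention) to a custom accelerator.
# All names are illustrative; SGLang's real backend interfaces differ.
from typing import Callable, Dict


class BackendRegistry:
    """Maps primitive op names to backend-specific implementations."""

    def __init__(self) -> None:
        self._ops: Dict[str, Callable] = {}

    def register(self, name: str):
        def decorator(fn: Callable) -> Callable:
            self._ops[name] = fn
            return fn
        return decorator

    def dispatch(self, name: str, *args):
        if name not in self._ops:
            raise NotImplementedError(f"op '{name}' has no backend kernel")
        return self._ops[name](*args)


custom_chip = BackendRegistry()


@custom_chip.register("matmul")
def chip_matmul(a, b):
    # Placeholder for a call into the chip's proprietary SDK; here a
    # pure-Python reference implementation over nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]


# Usage: the framework calls dispatch() instead of invoking a CUDA kernel.
result = custom_chip.dispatch("matmul", [[1, 2], [3, 4]], [[5, 6], [7, 8]])
```

In practice the registered functions would be thin wrappers over compiled C++/PyTorch extension kernels, with the registry keyed by device type as well as op name.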
Qualifications
- BS, MS, or PhD in Computer Science, Computer Engineering, or a related field.
- Software engineering experience focusing on systems programming, ML infrastructure, or AI compilers.
- Expertise in Python: Deep understanding of memory management and concurrent programming.
- Experience with LLM Inference Engines: Hands-on experience modifying or extending frameworks like SGLang, vLLM, DeepSpeed-FastGen, or TensorRT-LLM.
- PyTorch Internals: Strong experience writing PyTorch C++ extensions and custom operators.
- Hardware Interfacing: Proven track record of integrating machine learning workloads with hardware accelerators (GPUs, TPUs, NPUs) using custom SDKs, APIs, or low-level drivers.
- Prior experience working on non-CUDA software ecosystems (e.g., AMD ROCm, AWS Neuron, Google XLA).
- Familiarity with AI compilers and intermediate representations (MLIR, Apache TVM, OpenAI Triton).
- Strong understanding of underlying LLM architectures (Transformers, MoE) and state-of-the-art attention algorithms (FlashAttention v2/v3).
- Previous experience at an AI silicon startup or working on custom accelerators (e.g., Google TPU, AWS Trainium).
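The PagedAttention-style memory management referenced in this posting treats the KV cache like virtual memory: each sequence maps logical token positions to fixed-size physical blocks allocated on demand, rather than reserving memory for the maximum sequence length. A simplified, hypothetical sketch of that block-table bookkeeping (illustrative only; production allocators in vLLM or SGLang are far more involved):

```python
# Simplified sketch of paged KV-cache bookkeeping: each sequence maps logical
# token positions to fixed-size physical blocks, so memory is allocated on
# demand instead of reserved up front. All sizes here are toy values.
from typing import Dict, List, Tuple

BLOCK_SIZE = 4  # tokens per KV-cache block (real systems use e.g. 16)


class PagedKVCache:
    def __init__(self, num_blocks: int) -> None:
        self.free_blocks: List[int] = list(range(num_blocks))
        self.block_tables: Dict[int, List[int]] = {}  # seq_id -> physical blocks

    def append_token(self, seq_id: int, pos: int) -> Tuple[int, int]:
        """Return (physical_block, offset) for token `pos` of sequence `seq_id`."""
        table = self.block_tables.setdefault(seq_id, [])
        if pos // BLOCK_SIZE >= len(table):  # current block full: grab a new one
            table.append(self.free_blocks.pop())
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

    def free(self, seq_id: int) -> None:
        # Return a finished sequence's blocks to the free pool.
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))


cache = PagedKVCache(num_blocks=8)
# Appending 6 tokens fills one block and spills into a second.
slots = [cache.append_token(seq_id=0, pos=p) for p in range(6)]
```

The attention kernel then gathers keys and values through this block table, which is what lets throughput-oriented servers pack many sequences into one memory pool without fragmentation.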
This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.