Lead ML Inference Engineer, Advertising
Roku
Job Description
Teamwork makes the stream work.
Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.
From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.
About the team
The Advertising Performance group drives outcomes for all participants in the Advertising ecosystem - Advertisers, Publishers, and Roku. The systems and solutions span multiple disciplines and technologies to perform real-time multi-objective optimization across distributed systems at large scale and with low latency. We use Machine Learning, Reinforcement Learning, AI, Control and Optimization Systems, and Auction Dynamics to solve a large set of complex problems. At the core of this is our Machine Learning and Inference Platform that powers the entire landscape.
About the role
In this role, you will architect, design, and lead the development of a SOTA Inference platform that meets the low latency, scale, throughput, and availability demands of Advertising, with optimizations spanning hardware, software, and models. We’re looking for a strong technical leader with deep experience in ML serving, high-performance computing, and industry-standard frameworks - someone excited to mentor engineers, innovate at scale, and shape the future of machine learning at Roku.
What you’ll be doing
- Lead the design and development of a SOTA Inference platform
- Oversee the development of monitoring, observability, and other tooling to ensure system and model performance, reliability, and scalability of online inference services
- Identify and resolve system inefficiencies, performance bottlenecks, and reliability issues, ensuring optimized end-to-end performance
- Stay at the forefront of advancements in inference frameworks, ML hardware acceleration, and distributed systems, and incorporate innovations where and when they are impactful
We’re excited if you have
- M.S. or above in CS, ECE, or a related field
- 10+ years of experience in developing and deploying large-scale, distributed systems, with at least 5 years in a leadership or technical lead role
- Strong programming skills in high-performance languages
- Deep understanding of inference frameworks and ML system deployment
- Proven experience optimizing performance for large-