
Senior Software Engineer (Bigdata - Platform)

Roku

Bengaluru, India
Hybrid
Posted March 30, 2026

Job Description

Teamwork makes the stream work.


Roku is changing how the world watches TV

Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team

Roku runs one of the largest data lakes in the world: we store over 70 PB of data, run more than 10 million queries per month, and scan over 100 PB of data per month.
The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate, and access the data in the lake, for both streaming and batch workloads. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in open source, and we plan to increase our engagement over time.


About the Role 

Roku is in the process of modernizing its Big Data Platform. We are defining a new architecture to improve the user experience, minimize cost, and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert in Big Data technologies? Have you looked under the hood of these systems? Are you interested in open source? If you answered “Yes” to these questions, this role is for you!


What you will be doing 

  • You will be responsible for streamlining and tuning existing Big Data systems and pipelines, and for building new ones. Ensuring the systems run efficiently and at minimal cost is a top priority.
  • You will make changes to the underlying systems and, when the opportunity arises, contribute your work back to the open-source community.
  • You will also be responsible for supporting internal customers and providing on-call coverage for the systems we host. Ensuring a stable environment and a great user experience is another top priority for the team.


We are excited if you have 

  • 10+ years of production experience building big data platforms based on Spark, Trino, or equivalent
  • Strong programming expertise in Java, Scala, Kotlin, or another JVM language
  • A robust grasp of distributed systems concepts, algorithms, and data structures
  • Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
  • Experience working with at least three of the following technologies/tools: Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
  • Extensive hands-on experience with a public cloud (AWS or GCP)
  • BS/MS degree in CS or equivalent
  • AI literacy and an AI growth mindset


Our Hybrid Work Approach

Roku fosters an inclusive and collaborative environment where teams work in the office Monday through Thursday. Fridays are flexible for remote work except for employees whose roles are required to b
