
Senior Data Engineer

Roku

Roku

Bengaluru, India
Hybrid
Posted March 30, 2026

Job Description

Teamwork makes the stream work.

 

Roku is changing how the world watches TV

Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

 

 

About the team

The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on device metrics, you will design data models and develop scalable data pipelines that capture business metrics across Roku devices.

 

About the role

Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels, and billions of hours watched on the platform, building a scalable, highly available, fault-tolerant big data platform is critical to our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

 

What you'll be doing

  • Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) that process tens of terabytes of ingested data every day and feed a petabyte-scale data warehouse
  • Build quality data solutions and refine diverse existing datasets into simplified data models that encourage self-service
  • Build data pipelines that optimize for data quality and are resilient to poor-quality data sources
  • Own data mapping, business logic, transformations, and data quality
  • Perform low-level systems debugging, performance measurement, and optimization on large production clusters
  • Participate in architecture discussions, influence the product roadmap, and take ownership of and responsibility for new projects
  • Maintain and support existing platforms and evolve them to newer technology stacks and architectures

 

We're excited if you have

  • Extensive SQL skills
  • Proficiency in at least one scripting language; Python is required
  • Experience with big data technologies such as HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, and Presto
  • Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures
  • Experience with AWS, GCP, or Looker is a plus
  • Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
  • 5+ years of professional experience as a Data Engineer
  • BS in Computer Science; MS in Computer Science preferred
  • AI literacy and an AI growth mindset

 

 

Our Hybrid Work Approach
