Sr./Staff Applied AI Engineer - Multimodal Transformers

Kodiak Robotics

Compensation

$200,000 - $260,000/year

San Francisco Bay Area
On-site
Posted April 15, 2026

Job Description

Kodiak Robotics, Inc. was founded in 2018 and has become a leader in autonomous ground transportation committed to a safer and more efficient future for all. The company has developed an artificial intelligence (AI) powered technology stack purpose-built for commercial trucking and the public sector. The company delivers freight daily for its customers across the southern United States using its autonomous technology. In 2024, Kodiak became the first known company to publicly announce delivering a driverless semi-truck to a customer. Kodiak is also leveraging its commercial self-driving software to develop, test and deploy autonomous capabilities for the U.S. Department of Defense.

Kodiak's autonomy stack is built on AI that fuses diverse sensor streams into a unified, actionable understanding of the world. We are developing GigaFusionNet – a large-scale multimodal transformer that learns rich, joint representations across camera, LiDAR, and radar through attention-based fusion. We are looking for engineers to push the boundaries of how transformer architectures combine and reason over heterogeneous sensor data. This role is open to all levels – from those eager to contribute to cutting-edge research to experts driving innovation at scale.

In this role, you will:
  • Design and develop multimodal transformer architectures that fuse camera, LiDAR, and radar into unified representations
  • Research and implement cross-modal attention mechanisms, token fusion strategies, and efficient multi-stream tokenization
  • Build scalable training pipelines for large-scale multimodal transformers across massive real-world datasets
  • Explore self-supervised and contrastive pretraining objectives that learn transferable multimodal representations
  • Optimize transformer models for real-time inference under latency and compute constraints
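To give a flavor of the cross-modal attention work described above, here is a minimal, illustrative NumPy sketch of one fusion step: tokens from one modality (camera) attend over tokens from another (LiDAR). The token counts, embedding size, and function names are hypothetical and chosen for clarity; this is not Kodiak's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(queries, keys, values):
    """Scaled dot-product attention where one modality's tokens (queries)
    attend over another modality's tokens (keys/values)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_kv) similarity matrix
    weights = softmax(scores, axis=-1)       # each query row sums to 1
    return weights @ values                  # (n_q, d) fused token features

rng = np.random.default_rng(0)
camera_tokens = rng.normal(size=(8, 64))    # e.g. 8 image-patch embeddings
lidar_tokens = rng.normal(size=(32, 64))    # e.g. 32 point-cluster embeddings

# Camera tokens are enriched with LiDAR context, keeping their own count.
fused = cross_modal_attention(camera_tokens, lidar_tokens, lidar_tokens)
print(fused.shape)  # (8, 64)
```

In production systems this step would typically be a multi-head attention layer (e.g. PyTorch's `nn.MultiheadAttention`) with learned query/key/value projections, stacked and interleaved with self-attention and feed-forward blocks.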
What you’ll bring:
  • BS, MS, or PhD in AI, Computer Science, or a related field, or at least 2-3 years of industry experience
  • Experience with transformer architectures, particularly in multimodal or multi-stream settings
  • Familiarity with cross-attention, token fusion, or modality alignment techniques
  • Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow
  • Strong understanding of scalable training for large models, including distributed training