Machine Learning Engineer, Platform Integrations
Twelve Labs
Job Description
Who we are
At Twelve Labs, we are pioneering cutting-edge multimodal foundation models that comprehend video the way humans do. Our models have redefined the standard in video-language modeling, giving us more intuitive and far-reaching capabilities and fundamentally transforming how we interact with and analyze media.
With a remarkable $107 million in Seed and Series A funding, our company is backed by top-tier venture capital firms such as NVIDIA’s NVentures, NEA, Radical Ventures, and Index Ventures, and prominent AI visionaries and founders such as Fei-Fei Li, Silvio Savarese, Alexandr Wang and more. Headquartered in San Francisco, with an influential APAC presence in Seoul, our global footprint underscores our commitment to driving worldwide innovation.
We are a global company that values the uniqueness of each person’s journey. The differences in our cultural, educational, and life experiences allow us to constantly challenge the status quo. We are looking for individuals who are motivated by our mission and eager to make an impact as we push the boundaries of technology to transform the world. Join us as we revolutionize video understanding and multimodal AI.
About the Role
Twelve Labs builds frontier multimodal foundation models for video understanding. Our models are deployed across a growing set of Cloud Service Provider (CSP) and data platforms — each with different compute hardware, ML inference stacks, and runtime constraints.
You'll own the model-level engineering that makes this possible. This means optimizing TwelveLabs models for scalable, reliable, and performant inference across heterogeneous environments — designing how video decode pipelines, tensor orchestration, and model components behave on different hardware and inference engines. Every new platform is a new systems design problem at the model layer.
You'll also design and implement massively distributed model inference systems for multimodal inputs, working across varied ML inference stacks — from hardware accelerators (NVIDIA, Trainium, Inferentia) to inference engines (vLLM, FriendliAI) and orchestrators (Ray, Anyscale). Your work directly determines how fast, how reliably, and at what cost Twelve Labs models serve inference at scale.
In this role, you will
Optimize Twelve Labs' video foundation models for deployment on model inference platforms across public clouds (AWS, Azure, GCP, OCI) and data platforms (Databricks, Snowflake)
Conduct experiments to benchmark and optimize model performance across inference stacks — measuring latency, throughput, and cost across different accelerator and serving configurations
Collaborate with platform partner engineering teams as a peer to resolve inference-level technical challenges and inform how their infrastructure evolves to support multimodal workloads
Work closely with Twelve Labs' core ML research team to ensure model architecture decisions account for multi-platform deployment requirements
You may be a good fit if you have
8+ years building ML systems in production, with deep experience in model serving, inference optimization, capacity planning, and GPU compute
Deep understanding of the full model inference stack — from model weights and tensor operations through serving runtimes to accelerator hardware
Experience designing production services using Python, Postgres, FastAPI, SQLAlchemy, Pydantic, and related libraries
Strong hands-on experience with cloud infrastructure (AWS, GCP or Azure), Docker, Kubernetes, and distributed systems in real-world environments — specifically in the context of ML inference and model hosting capabilities
Experience defining the technical roadmap and prioritization for large, ambiguous, cross-functional projects, and driving high-impact technical decisions
Preferred Qualifications
Direct experience working with cloud provider partner teams to scale infrastructure or products across multiple platforms — navigating differences in networking, security, billing, and managed service offerings