Mid-Level

Embedded AI Engineer – Android Automotive (On-Device Intelligence)

Applied Intuition

Compensation

$150,000 - $250,000/year

Sunnyvale, California, United States
Remote
Posted April 9, 2026

Job Description

About Applied Intuition

Applied Intuition, Inc. is powering the future of physical AI. Founded in 2017 and now valued at $15 billion, the Silicon Valley company is creating the digital infrastructure needed to bring intelligence to every moving machine on the planet. Applied Intuition services the automotive, defense, trucking, construction, mining and agriculture industries in three core areas: tools and infrastructure, operating systems, and autonomy. Eighteen of the top 20 global automakers, as well as the United States military and its allies, trust the company’s solutions to deliver physical intelligence. Applied Intuition is headquartered in Sunnyvale, California, with offices in Washington, D.C.; San Diego; Ft. Walton Beach, Florida; Ann Arbor, Michigan; London; Stuttgart; Munich; Stockholm; Bangalore; Seoul; and Tokyo. Learn more at applied.co.

We are an in-office company, and we expect employees to work primarily from their Applied Intuition office five days a week. However, we also recognize the importance of flexibility and trust our employees to manage their schedules responsibly. This may include occasional remote work, starting the day with morning meetings from home before heading to the office, or leaving early when needed to accommodate family commitments.

About the role

We are building on-device intelligence for a next-generation Android Automotive platform. In this role, you will own the end-to-end lifecycle of embedded ML systems, ensuring models behave predictably and safely in production under real-world constraints such as latency, thermal limits, and functional safety requirements.

At Applied Intuition, you will:

  • Deploy and run production-grade ML inference and learning systems on Android Automotive (AAOS)

  • Implement on-device multimodal LLMs, including schema design and safe dispatch to local vehicle APIs

  • Integrate models using TensorFlow Lite, ONNX Runtime, or specialized vendor SDKs

  • Profile and optimize models for strict latency, memory, power, and thermal budgets

  • Instrument runtime performance across CPU, GPU, and NPU acceleration layers

  • Design safety boundaries and guardrails for model outputs, including tool-call allowlists and fallback logic

  • Interface directly with vehicle signals, sensors, and system services using C++ and JNI
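To make the "safety boundaries and guardrails" bullet concrete, here is a minimal, hypothetical C++ sketch of a tool-call allowlist with fallback logic. The `ToolCall` shape, tool names, and `SafeDispatcher` class are illustrative assumptions, not Applied Intuition's actual API: the point is simply that model output can only reach vehicle services through explicitly registered handlers.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical shape of a tool call emitted by an on-device LLM.
struct ToolCall {
    std::string name;  // function the model asked to invoke
    std::string args;  // serialized (e.g. JSON) arguments
};

class SafeDispatcher {
public:
    // Register a vehicle API handler. Only registered (allowlisted)
    // tools are ever reachable from model output.
    void allow(const std::string& name,
               std::function<std::string(const std::string&)> handler) {
        allowlist_[name] = std::move(handler);
    }

    // Dispatch a model-emitted tool call. Unknown tool names never
    // touch vehicle services; they fall back to a safe rejection.
    std::string dispatch(const ToolCall& call) const {
        auto it = allowlist_.find(call.name);
        if (it == allowlist_.end()) {
            return "REJECTED: tool not allowlisted";
        }
        return it->second(call.args);
    }

private:
    std::unordered_map<std::string,
                       std::function<std::string(const std::string&)>>
        allowlist_;
};
```

In a real system the handler would also validate arguments against a schema before touching vehicle signals; the allowlist is only the outermost guardrail.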

We’re looking for someone who has:

  • BS, MS, or PhD in Computer Science, Electrical Engineering, or a related technical field

  • 3+ years of experience shipping ML inference on embedded, mobile, or automotive platforms

  • Strong proficiency in C++ and experience with native Android integration (JNI)

  • Expertise in model optimization techniques such as quantization, pruning, and compilation

  • Experience integrating LLM function calling or tool execution with structured outputs

  • Hands-on experience with Android system services or Android Automotive OS (AAOS)

  • Deep understanding of edge constraints including real-time behavior and memory pressure
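As a reference point for the quantization expertise listed above, the sketch below shows the standard affine (scale and zero-point) int8 quantization scheme that post-training quantization in toolchains such as TensorFlow Lite builds on. The struct and function names are illustrative, not any particular SDK's API.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Affine quantization parameters mapping a float range onto int8.
struct QuantParams {
    float scale;
    int32_t zero_point;
};

// Derive scale/zero-point so [min_val, max_val] maps onto [-128, 127].
// The range is widened to include 0.0f so that zero is exactly
// representable (important for padding and ReLU activations).
QuantParams choose_params(float min_val, float max_val) {
    min_val = std::min(min_val, 0.0f);
    max_val = std::max(max_val, 0.0f);
    QuantParams p;
    p.scale = (max_val - min_val) / 255.0f;
    int32_t zp = static_cast<int32_t>(std::round(-128.0f - min_val / p.scale));
    p.zero_point = std::clamp(zp, -128, 127);
    return p;
}

int8_t quantize(float x, const QuantParams& p) {
    int32_t q = static_cast<int32_t>(std::round(x / p.scale)) + p.zero_point;
    return static_cast<int8_t>(std::clamp(q, -128, 127));
}

float dequantize(int8_t q, const QuantParams& p) {
    return p.scale * static_cast<float>(q - p.zero_point);
}
```

Round-tripping a value through `quantize`/`dequantize` introduces at most about half a scale step of error, which is the budget that per-channel quantization and quantization-aware training work to shrink.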
