
Senior Machine Learning Engineer, Perception

Anduril Industries

Washington, District of Columbia, United States
On-site
Posted April 3, 2026

Job Description

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

ABOUT THE TEAM

Anduril's Lattice software platform integrates many sensors into a single cohesive view of the battlefield, providing the context our users need to make better decisions, faster. Perception Engineers on Anduril's Frontier AI team build edge-compatible systems that power these views, transforming complex multi-sensor inputs into accurate, actionable understanding of the world. We integrate ambitious machine learning and computer vision algorithms directly into the Lattice platform to increase the fidelity, accuracy, and usefulness of battlefield awareness. These systems improve the experience of Anduril's existing customers, help scale our business to new customers, use cases, and business lines, and create high-quality snapshots of the world that autonomous and agentic systems can operate on top of.

WHAT YOU'LL DO

  • Design, train, and deploy computer vision and perception models for edge-
    compatible, mission-critical environments
  • Build multi-sensor perception systems that fuse imagery, video, and other sensor
    data into coherent views of the battlefield
  • Improve the fidelity, accuracy, and robustness of Lattice's understanding of objects,
    activities, and environments across varied operational conditions
  • Integrate state-of-the-art machine learning algorithms into production systems used
    by customers in real-world scenarios
  • Develop mission-relevant evaluation frameworks, datasets, and benchmarks to
    measure perception performance in the field
  • Partner closely with software, autonomy, product, and deployment teams to turn
    research ideas into operational capabilities
  • Identify new perception and battlefield-understanding problems where advances in
    computer vision can unlock major product impact

REQUIRED QUALIFICATIONS

  • BS, MS, or PhD in Computer Science, Robotics, Electrical Engineering, Machine
    Learning, or a related technical field
  • 5+ years of experience developing computer vision, perception, or machine learning
    systems in production or advanced research settings
  • Strong experience with deep learning, object detection, and object tracking
    frameworks
  • Strong programming skills in Python, with the ability to build reliable research and
    production workflows
  • Strong desire to invent, implement, test, and deploy novel techniques that vastly
    improve upon state-of-the-art public methods to solve research problems specific
    to the defense space
  • Experience training, evaluating, and iterating on vision models for detection, tracking,
    segmentation, classification, or sensor fusion tasks
  • Experience deploying or optimizing ML systems for constrained, real-time, or edge
    compute environments
  • Ability to work across the full lifecycle of applied ML, from problem formulation and
    data strategy through deployment and performance monitoring
  • Eligible to obtain and maintain an active U.S. Secret security clearance

PREFERRED QUALIFICATIONS

  • Advanced degree with a focus on computer vision, perception, robotics, or machine
    learning
  • Experience with multi-modal or multi-sensor fusion systems
  • Experience deploying deep learning models to embedded, edge, or air-gapped
    environments
  • Familiarity with real-time perception systems for autonomous platforms, defense
    applications, or other safety-critical systems
  • Experience building large-scale datasets, labeling pipelines, or benchmarking
    infrastructure for vision systems
  • Experience working in high-ownership environments