Overview
Mid-Level

Data Engineer, Infrastructure FinOps


Anduril Industries

Compensation

$146,000 - $194,000/year

Costa Mesa, California, United States
On-site
Posted April 3, 2026

Job Description

Anduril Industries is a defense technology company with a mission to transform U.S. and allied military capabilities with advanced technology. By bringing the expertise, technology, and business model of the 21st century’s most innovative companies to the defense industry, Anduril is changing how military systems are designed, built, and sold. Anduril’s family of systems is powered by Lattice OS, an AI-powered operating system that turns thousands of data streams into a real-time, 3D command and control center. As the world enters an era of strategic competition, Anduril is committed to bringing cutting-edge autonomy, AI, computer vision, sensor fusion, and networking technology to the military in months, not years.

ABOUT THE TEAM

Anduril’s Infrastructure Engineering organization provides the digital foundations for all aspects of our business. We are evolving toward a more data-driven approach for designing, building, and managing the critical infrastructure that ingests, transforms, and serves massive datasets to power analytics, machine learning, and intelligent applications.

ABOUT THE JOB

We are seeking a highly skilled Data Engineer with a strong software engineering foundation to help build and scale our next-generation data platform. This is a high-impact role where you will have the autonomy to architect robust systems that unlock critical business insights from complex, large-scale data.

WHAT YOU’LL DO

  • Design and Build Scalable Data Pipelines: Architect, develop, and maintain robust and efficient ETL/ELT pipelines to process petabyte-scale data from a wide variety of sources using platforms like Google BigQuery, Databricks, and Palantir Foundry.
  • Apply Software Engineering Principles: Treat data infrastructure as a software product. Develop reusable frameworks, libraries, and tools to accelerate data pipeline development and ensure reliability, using best practices like version control, code reviews, and CI/CD.
  • Architect and Model Data: Design and implement well-structured, performant data models and schemas in our data warehouse and data lakehouse environments, optimized for both analytical querying and application use.
  • Ensure Data Quality and Reliability: Implement comprehensive data quality checks, monitoring, and alerting to proactively identify and resolve data issues. Troubleshoot and optimize performance bottlenecks in data processing and storage layers.
  • Be a Force Multiplier: Collaborate closely with data scientists, analysts, software engineers, and business stakeholders.
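To give a flavor of the data-quality work described above, here is a minimal, purely illustrative sketch of a batch completeness check with an alerting threshold. The function names, fields, and threshold are hypothetical examples, not Anduril's actual tooling:

```python
# Illustrative only: a row-level completeness check of the kind a pipeline
# might run before publishing a batch. All names and thresholds are
# hypothetical examples.

def check_completeness(rows, required_fields):
    """Return the fraction of rows where every required field is non-null."""
    if not rows:
        return 0.0
    complete = sum(
        1 for row in rows
        if all(row.get(field) is not None for field in required_fields)
    )
    return complete / len(rows)

def validate_batch(rows, required_fields, threshold=0.99):
    """Raise if completeness falls below the alerting threshold."""
    score = check_completeness(rows, required_fields)
    if score < threshold:
        raise ValueError(f"Data quality check failed: completeness {score:.2%}")
    return score
```

In practice, checks like this would run inside an orchestrated pipeline (e.g., as a dbt test or a Spark job) and feed monitoring and alerting rather than raising directly.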

REQUIRED QUALIFICATIONS

  • 5+ years of experience in a data engineering or a similar software engineering role, ideally building data products in a fast-paced environment.
  • Strong programming proficiency in Python.
  • Hands-on expertise with at least one major cloud data platform such as Google BigQuery, Databricks, or an enterprise data system like Palantir Foundry.
  • Expert-level SQL and query optimization skills on large-scale datasets.
  • Solid understanding of data modeling concepts, ETL/ELT design patterns, and data warehousing/lakehouse principles.
  • Experience with a major cloud environment (GCP, AWS, Azure), including its security ecosystem and containerization technologies (Docker, Kubernetes).
  • Must be eligible to obtain and maintain a US Top Secret security clearance.

PREFERRED QUALIFICATIONS

  • Hands-on experience with data orchestration and transformation tools like Spark, PySpark, and dbt.
  • Experience with Infrastructure as Code (e.g., Terraform, CloudFormation) for managing data resources.
  • Knowledge of real-time data streaming technologies (e.g., Kafka, Flink, Pub/Sub).
  • Experience implementing CI/CD pipelines for data applications (e.g., GitHub Actions, Jenkins).
  • Experience with or a strong interest in learning how to develop data services and data products using modern software architecture and API design principles.
  • Familiarity with data visualization tools, such as Tableau.