Senior Compliance Engineer, AI Governance

True Anomaly

Compensation

$145,000 to $195,000

Denver, CO or Long Beach, CA or Washington, DC
Hybrid
Posted April 29, 2026

Job Description

Space is a warfighting domain. True Anomaly seeks those with the talent and ambition to build the technology that secures it.

OUR MISSION

True Anomaly delivers decisive capabilities for space superiority. We build autonomous spacecraft, advanced payloads, mission software, and space-based interceptors — enabling the U.S. and its Allies to secure the space environment and counter threats from the ultimate high ground.

OUR VALUES

  • Be the offset. We create asymmetric advantages with creativity and ingenuity.
  • What would it take? We challenge assumptions to deliver ambitious results.
  • It’s the people. Our team is our competitive advantage and we are better together.

Your Mission 

We are seeking a rare combination of disciplines: an experienced Senior Compliance Engineer who is a deep AI subject matter expert (SME) with an export compliance background, joining our Governance, Risk, and Compliance (GRC) team. This role is responsible for building, implementing, and sustaining the organizational compliance posture across key regulatory and security frameworks, with a primary emphasis on RMF (NIST 800-53 Rev. 5 plus classified overlays), CMMC Level 3, NIST 800-171 Rev. 3, EAR/ITAR cyber regulations, and, critically, the governance, risk management, and compliance controls surrounding AI/ML systems and large language models (LLMs) deployed across the enterprise.

As AI becomes embedded in True Anomaly's operations, mission systems, and products, this role serves as the organizational authority on how AI capabilities are adopted, audited, and controlled responsibly. You will architect and operationalize compliance checkpoints and governance gates within LLM pipelines, evaluate AI vendors and platforms (including OpenAI, Anthropic Claude, and others) against classified and unclassified compliance requirements, and ensure AI-driven workflows satisfy both regulatory obligations and internal risk tolerance. 

The ideal candidate brings deep GRC knowledge, hands-on AI/LLM engineering fluency, and the ability to engage credibly with compliance assessors, government partners, and internal AI/ML engineering teams alike. 

 

Responsibilities 

 

Compliance Program Execution 

  • Lead and support compliance assessment readiness across key organizational frameworks including NIST SP 800-171 Rev. 2 and 3, CMMC Level 3, NIST SP 800-53 Rev. 5, and the NIST Cybersecurity Framework (CSF). 
  • Provide direction on cybersecurity readiness to address EAR and ITAR-related controls and requirements. 
  • Drive CMMC readiness activities across the organization, including scoping, gap analysis, control implementation validation, evidence collection, and pre-assessment preparation. 
  • Review, maintain, and mature System Security Plans (SSPs) to accurately reflect organizational control implementations, system boundaries, and operational practices — including AI/ML system boundaries and data flows. 
  • Manage Plans of Actions and Milestones (POA&Ms), tracking open findings to resolution, communicating status to GRC leadership, and coordinating remediation efforts across responsible teams. 
  • Conduct internal compliance audits and control effectiveness reviews to ensure ongoing adherence to applicable frameworks and to surface emerging gaps before external assessments. 
  • Maintain audit-ready evidence repositories and documentation packages, ensuring traceability between controls, evidence, and framework requirements. 

 

AI Governance, Risk & Compliance (AI-GRC) 

  • Serve as the organizational AI compliance SME — the primary authority on how AI/LLM systems (including OpenAI GPT models, Anthropic Claude, open-source models, and internally developed models) are evaluated, onboarded, and continuously governed within True Anomaly's compliance boundaries. 
  • Design, implement, and maintain compliance checkpoints and enforcement gates within LLM pipelines, including:
      • Input/output filtering and content policy enforcement layers
      • Prompt injection detection and mitigation controls
      • Data classification guardrails to prevent CUI, ITAR-controlled, or classified data from flowing into non-authorized AI systems or endpoints
      • Automated audit logging of AI interactions for traceability and incident investigation
      • Model access control and role-based permissions within AI platforms
  • Conduct AI-specific risk assessments, including evaluation of AI vendor data handling practices, model training data provenance, and third-party AI API security postures against NIST AI RMF, NIST SP 800-53 AI overlays, and internal standards. 
  • Develop and enforce an AI System Acceptable Use Policy and supporting standards that govern how employees and systems interact with LLMs, including permissible data inputs, output handling, human-in-the-loop requirements, and escalation procedures. 
  • Evaluate proposed AI/ML use cases for regulatory risk (EAR/ITAR, CMMC, data privacy) and provide compliance go/no-go determinations with documented rationale. 
  • Collaborate with AI/ML engineers and DevSecOps teams to integrate compliance gates into CI/CD pipelines and MLOps workflows, ensuring model changes and prompt changes undergo review before production deployment. 
  • Maintain an AI system inventory, tracking all deployed models, APIs, integrations, and associated risk and compliance status. 