Principal Engineer, AI Security

Lila Sciences

Compensation

$171,000 - $230,534/year

Cambridge, MA USA
On-site
Posted April 16, 2026

Job Description

Your Impact at Lila

As a Principal Security Engineer focused on AI Security, you will define and drive the technical strategy for securing how AI is used across Lila's enterprise. You will operate as a senior individual contributor, partnering with IT and business teams to ensure safe and compliant adoption of AI tools and platforms.

While Lila builds AI-powered systems, this role is primarily focused on securing the use of third-party and internally deployed AI tools across the enterprise — ensuring sensitive data, intellectual property, and scientific workflows are protected as AI becomes deeply embedded in how work gets done.

What You'll Be Building

  • Enterprise AI Security Strategy — Define and implement security controls and guardrails for the use of AI tools (e.g., LLM APIs, SaaS AI platforms, and internal AI services) across the organization.
  • AI Gateway & Agentic Gateway Security — Design and implement AI gateway controls to manage and monitor access to external and internal AI systems. Secure agentic workflows by enforcing identity, authorization, tool-use constraints, and policy controls for autonomous or semi-autonomous agents.
  • AI Red Teaming & Adversarial Testing — Conduct red teaming and adversarial testing focused on enterprise AI usage, including prompt injection, data exfiltration, jailbreaks, and abuse of connected tools and plugins.
  • Data Protection for AI Usage — Develop and enforce controls to prevent sensitive data leakage through AI systems, including input/output filtering, data classification, tokenization, and secure handling of prompts, embeddings, and outputs.
  • Multi-Layer AI Security (Network, Endpoint, Data) — Integrate AI security into existing enterprise security layers: network visibility and control over AI service access, API traffic inspection, and zero trust enforcement; endpoint security for developer machines, research environments, browsers, and plugins; data layer controls ensuring proper handling of sensitive data when interacting with AI systems.
  • AI Threat Modeling (Enterprise Context) — Develop threat models focused on enterprise AI usage, including risks such as data leakage, prompt injection, model misuse, supply chain risks from AI vendors, and unauthorized agent actions.
  • Vendor & Platform Security — Assess and guide secure adoption of third-party AI vendors and platforms, including evaluating data handling practices, model behavior, and integration risks.
  • Incident Response for AI Usage — Define and support response approaches for AI-related incidents, such as sensitive data exposure, policy violations, or misuse of AI tools.
  • Cross-Functional Technical Leadership — Partner with Legal, Compliance, IT, and Engineering to align AI usage with regulatory requirements, data governance policies, and responsible AI practices.
  • Security Enablement — Contribute to internal guidance and education on safe AI usage, including secure prompting, data handling, and appropriate use of AI tools.
  • Security Tooling & Implementation — Evaluate and implement tooling for AI security, including AI gateways, DLP integrations, monitoring solutions, and policy enforcement mechanisms.

What You’ll Need to Succeed

  • 8+ years of experience in information security, with strong expertise in enterprise, cloud, or application security.
  • Hands-on experience designing and implementing security controls in enterprise environments.
  • Familiarity with AI/ML systems and how modern AI tools (LLMs, copilots, APIs) are used in practice.
  • Experience with cloud platforms (AWS/GCP), SaaS security, and zero trust architectures.
  • Experience with data protection technologies (e.g., DLP, data classification, access controls).
  • Practical experience with threat modeling, red teaming, or adversarial testing.
  • Strong communication and influence skills across technical and non-technical stakeholders.

Bonus Points For

  • Experience securing enterprise use of LLMs, copilots, or generative AI platforms.