
Staff AI Security Engineer


Spring Health


San Francisco, CA (Hybrid)
Posted March 28, 2026

Job Description

Our mission: to eliminate every barrier to mental health.

At Spring Health, we’re on a mission to revolutionize mental healthcare by removing every barrier that prevents people from getting the help they need, when they need it. Our clinically validated technology, Precision Mental Healthcare, empowers us to deliver the right care at the right time—whether it’s therapy, coaching, medication, or beyond—tailored to each individual’s needs.

We proudly partner with over 450 companies, from startups to multinational Fortune 500 corporations, as a leading provider of mental health services, providing care for 10 million people. Our clients include brands you know and use, like Microsoft, Target, and Delta Airlines, all of whom trust us to deliver best-in-class outcomes for their employees globally. With our innovative platform, we’ve been able to generate a net positive ROI for employers, and we are the only company in our category to earn external validation of net savings for customers.

We have raised capital from prominent investors including Generation Investment, Kinnevik, Tiger Global, Northzone, RRE Ventures, and many more. Thanks to their partnership and our latest Series E Funding, our current valuation has reached $3.3 billion. We’re just getting started—join us on our journey to make mental healthcare accessible to everyone, everywhere.

We are actively seeking a Staff AI Security Engineer to join our team. Reporting to the CISO, you will define and evolve our AI security strategy to protect highly sensitive mental health data across both product and corporate environments. 

Please note that this is a hybrid role based in San Francisco, with an expectation to be in the office 2–3 days per week at our 2 Embarcadero Ctr. location. Candidates must be based in the San Francisco metro area or able to relocate independently within 90 days of their start date. Occasional travel will be required for team on-sites.

What you’ll do

  • Define and evolve our AI security strategy to protect highly sensitive mental health data across both product and corporate environments
  • Lead secure design and threat modeling for AI systems including LLMs, agentic workflows, and retrieval pipelines
  • Identify and mitigate risks such as prompt injection, data exfiltration, model abuse, and privilege escalation
  • Build scalable AI security guardrails and tooling that enable safe experimentation across engineering and business teams
  • Establish AI-specific governance frameworks covering identity, access control, auditability, and observability
  • Take ownership of and lead our AI Red Team to proactively identify vulnerabilities
  • Design and implement AI observability pipelines to detect anomalous model behavior and policy violations in near real-time
  • Develop and operationalize AI incident response playbooks to ensure rapid containment of security events
  • Partner with product and engineering teams to enable responsible AI innovation in a hyper-growth environment
  • Champion a culture of secure AI development by mentoring engineers and defining high standards for the organization

What success looks like in this role

  • 80% of new AI product features are threat modeled prior to GA
  • 80% of AI features are tested by the AI Red Team or equivalent adversarial testing before GA
  • Achieve ≥70% coverage of production AI features with automated LLM vulnerability testing
  • Grow participation in the AI Red Team by 10% YoY
Tags: Go, Rust, AI, Data, Product, Design