
AI Infrastructure Engineer

Intercom

Berlin, Germany
Hybrid
Posted April 17, 2026

Job Description

Intercom is the AI Customer Service company on a mission to help businesses provide incredible customer experiences. 

Our AI agent Fin, the most advanced customer service AI agent on the market, lets businesses deliver always-on, impeccable customer service and ultimately transform their customer experiences for the better. Fin can also be combined with our Helpdesk to form a complete solution, the Intercom Customer Service Suite, which provides AI-enhanced support for the more complex or high-touch queries that require a human agent.

Founded in 2011 and trusted by nearly 30,000 global businesses, Intercom is setting the new standard for customer service. Driven by our core values, we push boundaries, build with speed and intensity, and consistently deliver incredible value to our customers.

What's the opportunity?

We’re looking for Senior+ AI Infrastructure Engineers to build the systems that train and serve Intercom’s next generation of AI products.

Intercom is an AI company that builds from the GPU all the way up to a user-facing agent that resolves millions of customer service queries a month.

You’ll join a small, highly technical team working at the cutting edge of modern AI infrastructure. The AI Infra team built the training pipelines and runs the inference for custom models like Fin Apex, which outperforms frontier models in customer service tasks, and is the foundation of the AI Group's full stack approach to AI.

We’re particularly interested in engineers who have:

  • A track record of working on model training or model inference at scale, or on low‑level GPU programming (e.g. CUDA, Triton). Experience with one of these is great; experience with several is even better.

What will I be doing?

As a Senior AI Infrastructure Engineer focused on model training and inference, you will:

  • Implement and scale training pipelines for large transformer and LLM models, from data ingestion and preprocessing through distributed training and evaluation.
  • Build and optimize inference services that deliver low‑latency, high‑reliability experiences for our customers, including autoscaling, routing, and fallbacks.
  • Work on GPU‑level performance: tuning kernels, improving utilization, and identifying bottlenecks across our training and inference stack.
  • Collaborate closely with ML scientists to implement cutting-edge training and inference methods and bring them to production.
  • Play an active role in hiring, mentoring, and developing other engineers on the team.
  • Raise the bar for technical standards, reliability, and operational excellence across Intercom’s AI platform.

Profile we’re looking for:

These are indicative, not hard requirements.

We’re looking to hire Senior+ AI Infrastructure Engineers. You’re likely a great fit if:

  • You have 5+ years of experience in software engineering, with a strong track record of shipping high‑quality products or platforms.
  • You hold a degree in Computer Science, Computer Engineering, or a related field (or you have equivalent experience with very strong fundamentals).
  • You have hands‑on experience with one or more of the following:
    • Model training (especially transformers and LLMs).
    • Model inference at scale (again, especially transformers and LLMs).
    • Low‑level GPU work, such as writing CUDA or Triton kernels.
  • You're comfortable working in production environments at meaningful scale (traffic, data, or organizational complexity).
  • You communicate clearly, can explain complex technical topics to different audiences, and enjoy close collaboration with both engineers and non‑engineers.
  • You take pride in strong technical fundamentals and love learning.