
Senior Data Engineer - Product


Feedzai

Portugal
Remote
Posted March 26, 2026

Job Description

Feedzai is the world’s first RiskOps platform for financial risk management, and the market leader in safeguarding global commerce with today’s most advanced cloud-based risk management platform, powered by machine learning and artificial intelligence. Feedzai is securing the transition to a cashless world while enabling digital trust in every transaction and payment type. The world’s largest banks, processors, and retailers trust Feedzai to protect trillions of dollars and manage risk while improving the customer experience for everyday users, without compromising privacy. Feedzai is a Series D company and has raised $282M to date, with a valuation of $2 billion. Our technology protects 1 billion consumers and 90 billion transactions each year.

The Engineering (Tech) Team is responsible for all Feedzai product development. Together with Product Management and Data Science, we build the next generation of tools to catch fraud in real time with a machine-learning-first approach. Formed and managed by engineers, it is one of the most talented teams out there, spanning junior to senior engineers.

We are fast-paced and provide a safe, open, and collaborative environment that encourages us to lean in, try new things and discover our potential with continuous learning for everyone.

While building the best value for our customers, you will work on a wide range of technical challenges: building distributed systems that must operate 24/7 with ultra-low latencies, solving UI/UX problems that help fraud analysts fight fraud more efficiently, designing large-scale databases spanning relational, NoSQL, and graph stores, and validating and developing new data science techniques and algorithms.

Feedzai’s Pulse Engineering organization powers the Risk Engine that evaluates transactions in real-time based on strategies configured by data scientists and fraud analysts. At the same time, the Datascience Framework (DSF) provides a platform for clients to design, test, and promote these strategies into production. DSF runs complex, high-volume data workloads: Spark jobs on EMR or Kubernetes, Hadoop ecosystem components (HDFS/YARN), data ingestion pipelines via Firehose and Glue into S3, and interactive workflows through JupyterLab and the DS API.
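To make the ingestion flow above a little more concrete, here is a minimal, purely illustrative Python sketch (the stream name, record id, and key layout are invented for the example, not Feedzai's actual schema) of the kind of Hive-style, date-partitioned S3 key layout that Firehose/Glue pipelines commonly produce so that downstream Spark jobs can prune partitions when reading:

```python
from datetime import datetime, timezone

def s3_partition_key(stream: str, event_time: datetime, record_id: str) -> str:
    """Build a Hive-style date-partitioned S3 key (year=/month=/day=/hour=),
    the layout Firehose/Glue pipelines typically write to S3 so that Spark
    can prune partitions on read. Names here are invented for illustration."""
    t = event_time.astimezone(timezone.utc)  # normalize to UTC for stable keys
    return (
        f"{stream}/year={t:%Y}/month={t:%m}/day={t:%d}/hour={t:%H}/"
        f"{record_id}.json"
    )

key = s3_partition_key(
    "transactions",
    datetime(2026, 3, 26, 14, 5, tzinfo=timezone.utc),
    "rec-0001",
)
print(key)  # transactions/year=2026/month=03/day=26/hour=14/rec-0001.json
```

Partitioning by event date like this is a common convention because queries that filter on a date range only touch the matching prefixes rather than scanning the whole bucket.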

You:

As a Senior Data Engineer in Pulse Engineering, you will ensure the stability, scalability, and performance of DSF, working at the intersection of distributed systems, big data engineering, and developer experience. Your work will ensure that DS workloads run reliably, efficiently, and at scale, directly supporting the Risk Engine in production.

You are a Big Data specialist with deep hands-on expertise in Apache Spark and distributed data systems. You know how to tune jobs, troubleshoot cluster behaviour, and design scalable data workflows. At the same time, you’re a software engineer at heart — someone who writes clean, maintainable code, understands APIs, and can build platform components in Java with first-principles engineering discipline. You take pride in operating what you build, debugging complex distributed systems, and enabling data scientists and analysts with a platform that is reliable, predictable, and performant.

Our philosophy across Engineering includes the following key ideas, which you will play an active role in promoting within your team:

  • Teams operate with a “you build it, you run it” DevOps mindset, taking end-to-end ownership of development, deployment, and operations. You'll drive a culture focused on automation, observability, and operational excellence, enabling continuous delivery with confidence.
  • The platform is evolving towards a decoupled, microservice-based architecture, positioning it to scale efficiently in a multi-tenant, cloud-native environment. This effort is central to Feedzai’s long-term vision and product evolution.

Your Day-to-Day

  • Re-architect and scale existing big data processing components powering DSF.
  • Analyse workload patterns (Spark jobs, notebook activity, DS API usage) and drive performance, reliability, and cost improvements.
  • Ensure stability of Spark jobs running on EMR or Kubernetes clusters.
  • Operate and evolve Hadoop ecosystem components (HDFS/YARN).
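As an illustration of the workload-analysis responsibility above, here is a small, hypothetical Python sketch (job names and runtimes are invented; real inputs might come from Spark event logs or the YARN ResourceManager API) of summarising Spark job runtimes to flag candidates for tuning:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical job-runtime records (names and numbers invented for the example).
runs = [
    {"job": "feature-backfill", "runtime_s": 1280},
    {"job": "feature-backfill", "runtime_s": 1410},
    {"job": "feature-backfill", "runtime_s": 2950},  # outlier worth investigating
    {"job": "model-scoring-batch", "runtime_s": 310},
    {"job": "model-scoring-batch", "runtime_s": 295},
    {"job": "model-scoring-batch", "runtime_s": 330},
]

def summarize(runs):
    """Group runtimes by job and report mean and max, flagging jobs whose
    slowest run exceeds 1.5x the mean (a crude skew/regression signal)."""
    by_job = defaultdict(list)
    for r in runs:
        by_job[r["job"]].append(r["runtime_s"])
    report = {}
    for job, times in by_job.items():
        avg, worst = mean(times), max(times)
        report[job] = {"mean_s": avg, "max_s": worst, "flagged": worst > 1.5 * avg}
    return report

report = summarize(runs)
for job, stats in sorted(report.items()):
    print(job, stats)
```

In practice an analysis like this would be the starting point for deeper digging (stage-level metrics, shuffle sizes, executor utilisation), but even a simple mean-vs-max comparison surfaces jobs with unstable runtimes.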