
Senior Data DevOps Engineer


PTC

Bucharest, Romania
On-site
Posted May 8, 2026

Job Description

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. 

Our people make all the difference in our success. Today, we are a global team of nearly 7,000 and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.  

As a Data DevOps Engineer, you will design, build, and operate the core platforms, tooling, and automation that power our data engineering ecosystem across Central Data Operations (CDOPS). Your mission is to ensure data engineers can move fast without sacrificing reliability, security, or observability. 

You will own the runtime platforms (especially Apache Airflow and Power BI/Microsoft Fabric), container images, CI/CD pipelines, and developer workflows that support data ingestion, transformation, and analytics. 

This role sits at the intersection of data engineering and DevOps, with a strong focus on production excellence, operational rigor, and developer experience (DX). 

Your work will directly enable BI, AI, and Analytics teams by providing stable, well-instrumented, and easy-to-use data platforms. 

 

Day-to-Day Responsibilities 

Data Platform Architecture & Engineering 

  • Build, maintain, and evolve the Apache Airflow platform used by the CDOPS data team. 

  • Develop and version custom Airflow Docker images, including dependency management, plugins, and secure base images. 

  • Define best practices for DAG structure, retries, SLAs, backfills, and failure handling (a sketch follows this list). 

  • Define best practices for Airflow and our data warehouse development process. 

  • Improve orchestration scalability, resilience, and upgrade strategies. 
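
As a rough illustration of those DAG conventions, here is a minimal Airflow 2.x-style sketch of how retries, an SLA, and failure handling might be declared; the DAG id, owner, schedule, and notification callback are hypothetical placeholders, not PTC's actual configuration.

    # Minimal sketch of DAG-level best practices: retries, an SLA,
    # and a failure callback. All names and values are illustrative.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def notify_on_failure(context):
        # Placeholder: a real callback might page on-call or post to chat.
        print(f"Task {context['task_instance'].task_id} failed")

    default_args = {
        "owner": "cdops",                     # hypothetical owning team
        "retries": 2,                         # retry transient failures
        "retry_delay": timedelta(minutes=5),  # back off between attempts
        "sla": timedelta(hours=1),            # flag long-running tasks
        "on_failure_callback": notify_on_failure,
    }

    with DAG(
        dag_id="example_ingest",              # hypothetical DAG id
        start_date=datetime(2026, 1, 1),
        schedule="@daily",
        catchup=False,                        # no accidental backfills
        default_args=default_args,
    ) as dag:
        PythonOperator(
            task_id="extract",
            python_callable=lambda: print("extracting"),
        )

Setting these in default_args keeps the policy uniform across tasks, which is what makes it enforceable as a team-wide convention.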

 

CI/CD & Automation for Data Platforms 

  • Design and operate CI/CD pipelines for: 

      • Airflow DAG validation, testing, and deployment (a pytest-based sketch follows this list) 

      • Container image builds and releases 

      • Configuration promotion across environments 

  • Enforce Git-based workflows, automated checks, and policy-as-code for data platforms. 

  • Reduce deployment friction while improving safety, traceability, and rollback capabilities. 
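
For instance, the DAG-validation stage of such a pipeline might look like the hedged sketch below, using pytest and Airflow's DagBag; the dags/ folder path and the retry policy being enforced are assumptions, not the team's actual checks.

    # Hypothetical CI gate: fail the build if any DAG cannot be imported
    # or ships without retries. The dags/ path is an assumed layout.
    from airflow.models import DagBag

    def test_dags_import_cleanly():
        dag_bag = DagBag(dag_folder="dags/", include_examples=False)
        # import_errors maps file path -> traceback for broken DAG files
        assert not dag_bag.import_errors, dag_bag.import_errors

    def test_every_task_has_retries():
        dag_bag = DagBag(dag_folder="dags/", include_examples=False)
        for dag_id, dag in dag_bag.dags.items():
            for task in dag.tasks:
                assert task.retries >= 1, f"{dag_id}.{task.task_id} has no retries"

Run under pytest before images are built, a gate like this keeps broken DAGs from ever reaching an environment, which is where the rollback and traceability gains come from.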

Platform Reliability, Operations & DevEx 

  • Architect and run containerized data systems using Docker and Docker Compose. 

  • Debug and operate Linux-based systems (CPU, memory, I/O, networking, DNS). 

  • Improve developer experience through: 

      • Local development tooling (a minimal helper sketch follows this list) 

      • Platform templates and automation 

      • Clear documentation and onboarding paths 

  • Eliminate toil and manual operational work through automation. 
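
As one hedged example of such tooling, a small wrapper like the sketch below can cut local-setup toil by putting the whole Compose stack behind a single command; the compose-file location and command names are assumptions.

    # Hypothetical local-dev helper wrapping the Docker Compose CLI so the
    # stack comes up with one command. Assumes a docker-compose.yml in cwd.
    import argparse
    import subprocess

    def compose(*args: str) -> None:
        # Delegate to Docker Compose; raise if the command fails.
        subprocess.run(["docker", "compose", *args], check=True)

    def main() -> None:
        parser = argparse.ArgumentParser(description="Local data-platform stack")
        parser.add_argument("action", choices=["up", "down", "logs"])
        args = parser.parse_args()

        if args.action == "up":
            compose("up", "--detach", "--wait")  # wait for healthchecks
        elif args.action == "down":
            compose("down", "--volumes")         # also drop local volumes
        else:
            compose("logs", "--follow")

    if __name__ == "__main__":
        main()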

 

Monitoring, Observability & Incident Management 

  • Design and operate platform-level observability using Prometheus and Grafana (a minimal exporter sketch follows this list). 

  • Define and maintain metrics, dashboards, and alerting standards for data platforms. 

  • Lead incident response, root-cause analysis, and post-incident improvements. 

  • Maintain runbooks, system diagrams, and operational documentation. 
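
To make that concrete, here is a minimal sketch of a custom exporter built on the prometheus_client library; the metric name, port, and probe are hypothetical stand-ins for whatever the platform actually measures.

    # Hypothetical platform-metrics exporter. Metric name, port, and the
    # probed value are illustrative; a real probe would query live systems.
    import time

    from prometheus_client import Gauge, start_http_server

    scheduler_heartbeat_age = Gauge(
        "airflow_scheduler_heartbeat_age_seconds",  # assumed metric name
        "Seconds since the last observed scheduler heartbeat",
    )

    def probe_heartbeat_age() -> float:
        # Placeholder probe; replace with a metadata-DB or API query.
        return 0.0

    if __name__ == "__main__":
        start_http_server(9109)  # assumed port scraped by Prometheus
        while True:
            scheduler_heartbeat_age.set(probe_heartbeat_age())
            time.sleep(15)

Grafana dashboards and alert rules can then key off the scraped series, which is what keeps dashboards and alerting on a shared standard.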

Collaboration & Cross-Functional Support 

  • Partner with Data Engineering, BI, and AI teams to design robust data flows and operational solutions. 

  • Translate developers’ needs into scalable platform solutions. 

  • Provide technical leadership through code reviews, design reviews, and architectural discussions. 

  • Mentor engineers in production-grade data systems, DevOps practices, and operational excellence. 

 

 

Preferred Skills & Knowledge 

  • Strong Python skills for automation and platform tooling. 

  • Expert knowledge of Docker (multi-stage builds, debugging, Compose) and Linux systems. 

  • Hands-on experience with Apache Airflow architecture and operations. 

  • Proven experience building and maintaining CI/CD pipelines. 

  • Expertise in monitoring and observability tooling (Prometheus, Grafana). 

  • Strong Git-based development practices and familiarity with Infrastructure-as-Code concepts. 

  • Exposure to Snowflake, dbt, Microsoft Fabric, Terraform, Airbyte, Apache Hop, or HashiCorp Vault is a plus. 

  • Clear communicator with strong system-level thinking. 

 

Preferred Experience 

  • Running Airflow in production at scale (Celery or Kubernetes executors). 

  • Designing monitoring architectures with Prometheus and Grafana. 

  • Experience in SaaS environments. 

  • Managing multi-container applications, secure secrets handling, and production troubleshooting. 

  • Willingness and motivation to progressively take on Data Engineering responsibilities as part of career growth. 

Basic Qualifications 

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent experience. 

  • 3+ years of hands-on experience in Data Engineering, Platform Engineering, or DevOps roles. 

  • English fluency. 

  • Proven experience operating production-grade data platforms. 

  • Strong understanding of cloud environments, Linux systems, and data pipeline orchestration. 

  • Demonstrated success in supporting crossfunctional data teams. 

 

 

 

 

Life at PTC is about more than working with today’s most cutting-edge technologies to transform the physical world. It’s about showing up as you are and working alongside some of today’s most talented industry leaders to transform the world around you. 

If you share our passion for problem-solving through innovation, you’ll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us?

We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.
