Mid-Level
Data Platform DevOps Engineer
Confirmed live in the last 24 hours
NXP Semiconductors
Bangalore
On-site
Posted April 1, 2026
Job Description
We are seeking an experienced Data Platform Engineer to join our core platform team. In this role, you will be responsible for building, securing, and automating our enterprise Data Platform on AWS. You will go beyond basic pipeline creation by designing and maintaining the underlying infrastructure and CI/CD frameworks that enable our data teams to operate and scale efficiently.
Job Responsibilities
• Cloud & Platform Infrastructure (IaC): Deploy and maintain Databricks workspaces and AWS infrastructure (VPC, PrivateLink, IAM, S3, Lambda, EKS, and Fargate) using Terraform.
• Unity Catalog Implementation: Automate the governance layer, including metastore configuration, external locations, and access controls within Unity Catalog.
• Security & Compliance: Ensure the platform adheres to enterprise security standards by implementing and managing automated security controls for cloud infrastructure and data protection.
• Workspace Lifecycle Management: Use Terraform for end-to-end workspace provisioning, ensuring consistent setup across Dev, Acc, and Prod environments.
• Governance & Cost Control (Policies): Design and implement policies and guardrails to enforce standards and control costs.
• Identity & Access Automation: Automate assignment of permissions using Terraform. Manage Service Principals for pipelines and map groups to specific Workspace roles and Unity Catalog grants.
• DevOps & Automation (CI/CD)
• Pipeline Architecture: Oversee GitLab CI/CD pipelines for data projects, transitioning the team from manual notebook deployments to automated workflows.
• Databricks Asset Bundles (DABs): Standardize deployment strategies using DABs. Develop templates and presets for Data Engineers to deploy jobs and workflows.
• Release Management: Implement branching strategies, code review policies, and environment promotion rules (Dev → Acc → Prod).
• Service Organization & Operations
• Observability: Configure monitoring, alerting, and logging (using system tables or integration with tools like CloudWatch) to ensure platform stability.
• Support & Incident Management: Serve as an escalation point for platform-related incidents.
• Knowledge Sharing: Document best practices and conduct workshops to upskill data engineers on effective platform usage.
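The Release Management responsibility above (branching strategies and Dev → Acc → Prod promotion rules) can be sketched as a small gate that a CI pipeline runs before deployment. This is a minimal illustration, not part of the posting: the branch names, environment labels, and function names are all assumptions.

```python
# Hypothetical sketch of Dev -> Acc -> Prod promotion rules; the branch
# names and the branch-to-environment mapping are assumptions, not taken
# from the job posting.

BRANCH_TO_ENV = {
    "develop": "dev",   # feature work lands here first
    "release": "acc",   # acceptance-testing environment
    "main": "prod",     # production deployments only
}

PROMOTION_ORDER = ["dev", "acc", "prod"]


def target_environment(branch: str) -> str:
    """Map a Git branch to the environment it is allowed to deploy to."""
    try:
        return BRANCH_TO_ENV[branch]
    except KeyError:
        raise ValueError(f"branch {branch!r} is not allowed to deploy")


def may_promote(source_env: str, dest_env: str) -> bool:
    """Allow promotion only one step forward in the Dev -> Acc -> Prod chain."""
    src = PROMOTION_ORDER.index(source_env)
    dst = PROMOTION_ORDER.index(dest_env)
    return dst == src + 1
```

In practice a check like this could run as an early GitLab CI job, failing the pipeline when the branch/environment pair violates the promotion rules before any `databricks bundle deploy` step executes.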
Job Qualifications:
• Bachelor’s degree in Computer Science, Software Engineering, Mathematics, or a related field.
• 5+ years industry experience in Data Engineering, Cloud Infrastructure, or DevOps; 3+ years with Databricks in enterprise settings.
• Advanced Terraform skills for managing cloud infrastructure and Databricks resources.
• Extensive knowledge of the AWS service portfolio.
• Expertise in CI/CD pipelines using GitLab CI and Databricks Asset Bundles.
• Deep understanding of Databricks Lakehouse architecture, Unity Catalog, Serverless Compute, Delta Lake, and Workflow orchestration.
• Solid grasp of SDLC/DataOps, including unit testing, modular code, and Git strategies.
• Proficient in Python (e.g., automation, PySpark, pandas) and Bash/Shell scripting for CI/CD.
• Excellent communication, documentation, mentoring, and collaboration skills.
• Preferred: Databricks Certified Data Engineer Professional or AWS Solutions Architect certification.