Mid-Level

Software Engineer II (Data Platform)

G-P

India (Remote-First)
Remote
Posted April 21, 2026

Job Description

About Us

Our leading SaaS-based Global Employment Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we’re dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere.

Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated.

The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work.

At G-P, we assist organizations in building exceptional global teams in days, not months—streamlining the hiring, onboarding, and management process to unlock growth potential for all.

About the Role

As a Software Engineer II on the Data Platform team, you are a key contributor to the architectural backbone of our AI-native platform. You don’t just build one-off pipelines; you help engineer the reliable frameworks and robust data engines that power every product feature and AI workflow across the company.

Working closely with Senior Engineers, you will develop the systems that ingest, process, and serve data from hundreds of internal services and massive event streams. You will ensure our Databricks Lakehouse remains a high-performance, automated, and trusted foundation for our global ecosystem.

What You Will Do

Build & Enhance Data Platform Systems

  • Design and develop core components of the data platform, including high-volume event ingestion pipelines and real-time processing workflows using Apache Spark.
  • Develop reusable data access layers and "framework-based" ETL/ELT processes that transform structured business data and event streams into trusted Lakehouse assets.
  • Contribute to the development of internal APIs and backend services that enable other engineering teams to interact with the data platform.

Engineer Reliable Data Flows

  • Build robust, fault-tolerant pipelines with a focus on data correctness, consistency, and availability.
  • Implement schema enforcement and validation gates to maintain strict data governance across systems, preventing "data swamp" scenarios.
  • Optimize code and query performance to ensure workloads are both fast and cost-efficient.

Drive Observability & Quality

  • Build and own the observability framework (logging, metrics, and alerting) to monitor pipeline health, data freshness SLAs, and system uptime.
  • Proactively identify and resolve bottlenecks in production; participate in root-cause analysis and incident response.
  • Apply rigorous automated testing (unit, integration, and regression) to all data code to ensure long-term maintainability.

Collaborative Platform Engineering

  • Participate in design discussions and code reviews, bringing a software engineering lens to data problems.
  • Follow governance and PII handling standards to keep sensitive data protected across the platform.