Mid-Level
Data Engineer, Global Procurement Technology
Confirmed live in the last 24 hours
Amazon Dev Center India - Hyderabad
Hyderabad, TS, IND
On-site
Posted April 13, 2026
Job Description
Amazon's Global Procurement Organization manages the indirect supply chain for World Wide Operations and Delivery Services. We work with massive amounts of data and leverage it for advanced applications.
We are seeking Data Engineers to build the capabilities that enable the Global Procurement Organization to gain deep insights into all facets of indirect supply management and to apply advanced data science for automation and decision making.
You are someone who is customer focused, relentless, and driven to help us build out a brand-new initiative. You will collaborate closely with product managers, developers, and leaders across the org. To be successful in this role, you should have broad skills in database design, be comfortable working with large and complex data sets, and be able to model and design a robust data warehouse and build self-service data platforms that let stakeholders use the data we manage.
Key job responsibilities
* Build and maintain scalable ETL/ELT data pipelines that ingest, transform, and serve procurement data across multiple systems
* Support existing data infrastructure including quality monitoring, alerting, and SLA tracking for critical business data feeds
* Implement data models and table schemas following established patterns across our data warehouse (Amazon Redshift / AWS Glue catalog)
* Work with business stakeholders and analysts to understand data requirements and deliver reliable, well-documented data products
* Contribute to data quality frameworks — write validation logic, data contracts, and anomaly detection jobs
* Participate in on-call rotations and troubleshoot pipeline failures affecting downstream procurement reporting
* Learn and apply data engineering best practices: idempotency, backfill strategies, schema evolution, and partitioning
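As a hedged illustration of the idempotency and data-quality practices listed above (the table name, row shape, and helper functions here are hypothetical examples, not Amazon internals), a partition-level overwrite pattern makes a daily load safe to re-run:

```python
from datetime import date

# Toy in-memory "warehouse": maps (table, partition_day) -> list of rows.
warehouse = {}

def validate(rows):
    """Basic data-contract check: every row needs a supplier id and a positive amount."""
    return all(r.get("supplier_id") and r.get("amount", 0) > 0 for r in rows)

def load_partition(table, partition_day, rows):
    """Idempotent load: validate the batch, then overwrite the whole partition.

    Because the partition is replaced rather than appended to, re-running
    the job with the same input yields the same warehouse state, so a
    failed run can simply be retried, and backfills reuse the same path.
    """
    if not validate(rows):
        raise ValueError(f"validation failed for {table} / {partition_day}")
    warehouse[(table, partition_day)] = list(rows)

rows = [{"supplier_id": "S1", "amount": 120.0}]
load_partition("procurement_spend", date(2026, 4, 13), rows)
load_partition("procurement_spend", date(2026, 4, 13), rows)  # safe re-run, no duplicates
```

The same overwrite-by-partition idea carries over to Redshift or Glue-cataloged tables, where a day's partition is dropped and rewritten rather than appended to.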
Basic Qualifications
- 1+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
- Knowledge of AWS infrastructure
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.