Mid-Level

Data Engineering

Unisys

Bangalore, KA, India
On-site
Posted April 29, 2026

Job Description

What success looks like in this role:

· End-to-End Pipeline Engineering: Build and automate robust ETL/ELT pipelines using Azure Data Factory (ADF), AWS Glue, and Apache Airflow.

· Distributed Computing: Develop large-scale data processing jobs using PySpark and Scala within Databricks or EMR environments.

· Streaming & Real-time Integration: Design and implement real-time data ingestion and processing layers using Apache Kafka, Confluent, or AWS Kinesis.

· Data Lakehouse: Manage and optimize cloud storage using ADLS Gen2 and S3, implementing ACID transactions with Delta Lake or Apache Iceberg.

· Advanced Data Modeling: Design highly performant schemas for cloud data warehouses like Snowflake, Amazon Redshift, or Google BigQuery.

· Data Transformation & Quality: Use dbt (data build tool) for modeling and implement automated quality checks using Great Expectations or Soda.

· Infrastructure & CI/CD: Deploy and manage data infrastructure using Terraform or CloudFormation, and maintain CI/CD pipelines via GitHub Actions or GitLab CI.

Technical Stack Requirements

· Cloud Platforms: Deep hands-on experience with Microsoft Azure (ADF, Synapse, Databricks) and AWS (S3, Glue, Athena, Lambda).

· Programming: Strong proficiency in Python (PySpark, FastAPI), SQL, and familiarity with Java or Scala.

· Big Data Tools: Experience with Apache Spark, Apache Flink, and Hadoop ecosystem.

· Databases: Strong knowledge of both Relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, or DynamoDB) databases.

· Containerization: Proficiency with Docker and Kubernetes (K8s) for deploying data services.

· Observability: Familiarity with monitoring tools like Prometheus, Grafana, or Datadog to track pipeline health.

You will be successful in this role if you have:

· Experience: 4–6 years of professional experience in data engineering, backend engineering, or a related field.

· Education: Bachelor’s degree in Engineering.

· Methodology: Strong understanding of Agile methodologies and the ability to work in a fast-paced, iterative environment.

· Soft Skills: Excellent problem-solving skills and the ability to explain complex technical concepts to non-technical stakeholders.

Preferred Certifications

· Azure Data Engineer Associate (DP-203).

· AWS Certified Data Engineer – Associate.

· Databricks Certified Professional Data Engineer.

Unisys is proud to be an equal opportunity employer that considers all qualified applicants without regard to age, blood type, caste, citizenship, color, disability, family medical history, family status, ethnicity, gender, gender expression, gender identity, genetic information, marital status, national origin, parental status, pregnancy, race, religion, sex, sexual orientation, transgender status, veteran status or any other category protected by law.

Local employment practices and rights may vary by jurisdiction and are subject to applicable local laws. This commitment includes our efforts to provide for all those who seek to express interest in employment the opportunity to participate without barriers.

If you are a US job seeker unable to review the job opportunities herein, or cannot otherwise complete your expression of interest, without additional assistance and would like to discuss a request for reasonable accommodation, please contact our Global Recruiting organization at GlobalRecruiting@unisys.com. US job seekers can find more information about Unisys’ EEO commitment here.
