Senior Consultant - Consumer Analytics Data Engineering

Eli Lilly

Bengaluru, India
On-site
Posted April 15, 2026

Job Description

At Lilly, we unite caring with discovery to make life better for people around the world. We are a global healthcare leader headquartered in Indianapolis, Indiana. Our employees around the world work to discover and bring life-changing medicines to those who need them, improve the understanding and management of disease, and give back to our communities through philanthropy and volunteerism. We give our best effort to our work, and we put people first. We’re looking for people who are determined to make life better for people around the world.

Eli Lilly Services India Pvt Ltd

Senior Consultant, Consumer Analytics Data Engineering

As Eli Lilly strives to achieve its purpose of making life better for patients, we have been building out our in-house ‘Consumer Experience’ function, which designs and executes next-generation marketing campaigns aimed at informing and educating consumers (or patients) directly.

We are seeking a highly skilled Senior Consultant to lead consumer analytics data engineering initiatives at Lilly, Bengaluru. This role is primarily focused on the Databricks platform and the AWS ecosystem: designing and maintaining robust data pipelines, lakehouse architectures, and semantic layers that power advanced analytical solutions and AI/agentic workflows for consumer insights. The ideal candidate brings deep hands-on Databricks expertise, strong AWS data engineering skills, and a working knowledge of semantic layer design and AI agent development. The role blends technical depth with business and domain integration.

Job Responsibilities:

  • Design, develop, and implement scalable ETL/ELT pipelines for extracting, transforming, and loading consumer data from various sources (e.g., CRM, marketing platforms, DCM, GA4, digital channels), with Databricks/AWS as the primary execution platform.
  • Architect and manage end-to-end solutions on Databricks including Unity Catalog, Delta Live Tables, Databricks Workflows, Databricks SQL, etc.; own platform governance covering schemas, permissions, and data lineage.
  • Design multi-hop lakehouse architectures (Bronze/Silver/Gold) using Delta Lake; optimize Spark compute, cluster configurations, and Auto Loader for performance and cost efficiency (a minimal pipeline sketch follows this list).
  • Leverage AWS data services (S3, Glue, Lambda, and Redshift) in conjunction with Databricks to build reliable, end-to-end consumer data flows (see the Lambda sketch after this list).
  • Architect and optimize data models and schemas to support complex analytical queries and reporting requirements related to consumer behavior, preferences, and engagement.
  • Design and publish semantic layers (metrics definitions, certified datasets, business logic) consumed by downstream BI tools and AI agents; build and deploy agentic workflows using Databricks AI Functions or similar frameworks (see the AI Functions sketch after this list). (Preferred)
  • Ensure data quality, integrity, and governance across all consumer data assets by implementing validation rules, schema evolution controls, and monitoring processes through Unity Catalog.
  • Collaborate with data scientists, business analysts, and marketing teams to understand data needs and translate them into technical data engineering solutions; partner to productionize ML models and feature stores on Databricks.
  • Implement automation for data ingestion, processing, and delivery with a focus on efficiency, reliability, and SLA adherence.
  • Troubleshoot and resolve data-related issues, performing root cause analysis and implementing corrective actions.
  • Mentor other data engineers and contribute to engineering standards, code reviews, knowledge sharing, and comprehensive documentation for data pipelines, models, and technical processes.
  • Stay current with Databricks platform updates, AWS data services, and emerging best practices in data engineering and AI-driven analytics.
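
To make the medallion and data-quality bullets above concrete, here is a minimal Delta Live Tables sketch in PySpark. It is an illustration only: the source path, table names, columns, and schema location are hypothetical placeholders, not a reference to any actual Lilly pipeline.

```python
# Minimal Bronze/Silver/Gold Delta Live Tables sketch (runs inside a DLT
# pipeline, where `spark` and the `dlt` module are provided by the runtime).
# All paths, table names, and columns below are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw consumer engagement events via Auto Loader.")
def consumer_events_bronze():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader incremental ingest
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/tmp/schemas/consumer_events")
        .load("s3://example-bucket/consumer-events/")  # hypothetical landing zone
    )

@dlt.table(comment="Silver: typed, deduplicated events.")
@dlt.expect_or_drop("valid_consumer_id", "consumer_id IS NOT NULL")  # quality rule
def consumer_events_silver():
    return (
        dlt.read_stream("consumer_events_bronze")
        .withColumn("event_ts", F.to_timestamp("event_time"))
        .dropDuplicates(["event_id"])
    )

@dlt.table(comment="Gold: daily engagement metrics by channel.")
def channel_engagement_gold():
    return (
        dlt.read("consumer_events_silver")
        .groupBy(F.to_date("event_ts").alias("event_date"), "channel")
        .agg(
            F.countDistinct("consumer_id").alias("unique_consumers"),
            F.count("*").alias("events"),
        )
    )
```

The @dlt.expect_or_drop decorator is the kind of declarative validation rule the data-quality bullet refers to: rows failing the constraint are dropped and the violation counts surface in pipeline monitoring.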
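
The AWS bullet pairs Lambda with Databricks; one common integration is an S3-triggered Lambda that starts a Databricks Workflows job through the Jobs API. This is a hedged sketch, not a prescribed design: the environment variables and job ID are hypothetical, it assumes the job's task is a notebook, and in production the token would come from AWS Secrets Manager rather than an environment variable.

```python
# Hypothetical AWS Lambda handler: on S3 object arrival, trigger a
# Databricks Workflows job (Jobs API 2.1 run-now) with the new key as
# a notebook parameter. Standard library only, so no Lambda layer needed.
import json
import os
import urllib.request

DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]  # better: fetch from Secrets Manager
JOB_ID = int(os.environ["DATABRICKS_JOB_ID"])      # hypothetical Workflows job

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    key = event["Records"][0]["s3"]["object"]["key"]
    payload = json.dumps({
        "job_id": JOB_ID,
        "notebook_params": {"source_key": key},  # assumes a notebook task
    }).encode("utf-8")

    req = urllib.request.Request(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        data=payload,
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the triggered run_id
```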
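
Finally, for the semantic-layer/AI bullet: Databricks exposes AI Functions such as ai_query in SQL, which can be reached from PySpark through expr. The gold table and model serving endpoint below are placeholders; treat this as a sketch of the pattern, not a recommended model choice.

```python
# Sketch: enrich a (hypothetical) certified gold dataset with an LLM-generated
# summary via the ai_query AI Function. Assumes a Databricks session where
# `spark` is predefined; the endpoint name is a placeholder for whatever
# model serving endpoint the workspace exposes.
from pyspark.sql import functions as F

gold = spark.table("consumer.gold.channel_engagement")  # hypothetical table

summaries = gold.withColumn(
    "engagement_summary",
    F.expr(
        "ai_query('databricks-meta-llama-3-3-70b-instruct', "
        "concat('In one sentence, summarize engagement for channel ', channel, "
        "' with ', CAST(unique_consumers AS STRING), ' unique consumers.'))"
    ),
)
summaries.show(truncate=False)
```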

Job Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
  • 8+ years of data engineering experience with hands-on production experience on Databricks.
  • Strong proficiency in Python and PySpark; advanced SQL for complex analytical workloads.
  • Familiarity with Software Development Life Cycle (SDLC) practices — including version control (Git), CI/CD pipelines, code reviews, and agile development methodologies. (Preferred)
  • Deep knowledge of Databricks platform architecture — Unity Catalog, Delta Lake, Databricks Workflows, Databricks SQL, and cluster/compute management.
  • Solid experience with AWS data stack: S3, Glue, Redshift, Lambda, and IAM.
  • Experience designing lakehouse architectures (medallion/multi-hop patterns) at scale and with ETL/ELT orchestration tools.
  • Strong understanding of data warehousing concepts, dimensional modeling, and data governance principles.
  • Hands-on experience building semantic layers (e.g., dbt metrics, Databricks AI/BI semantic layer) and creating AI agents or agentic pipelines (e.g., Databricks AI Functions) is preferred.
  • Excellent problem-solving, communication, and stakeholder collaboration skills.

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form (https://careers.lilly.com/us/en/workplace-accommodation) for further assistance. Please note this is for individuals to request an accommodation as part of the application process and any other correspondence will not receive a response.

Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.

#WeAreLilly