Mid-Level

AI Platform Data Engineer, Ring Decisions Sciences Platform

ADCI - Karnataka

Bengaluru, KA, IND
On-site
Posted November 14, 2025

Job Description

Ring is seeking an AI-first Platform Data Engineer who embraces a prompt-driven development philosophy and brings strong technical, analytical, communication, and stakeholder management skills to our team. This role sits at the intersection of data engineering, business intelligence, and platform engineering, requiring the ability to partner with software development engineers, scientists, data analysts, and business stakeholders across various verticals. You will design, evangelize, and implement platform features and curated datasets that power AI/ML initiatives and self-service analytics. All of this helps us provide a great neighbor experience at greater velocity.

You will work in a complex data environment where you will support various use cases, including self-service business reporting, production data pipelines, machine learning feature datasets, and datasets built for AI agents. This role requires a first-principles approach to leveraging AI at every layer of the data stack: using AI agents to write and optimize code, building AI-powered platforms that serve AI models, and deploying intelligent agents that make data accessible. You will use AI to build AI infrastructure, automate the automation, and create self-improving systems that continuously enhance data quality, discoverability, and usability. Experience with AI-powered development tools, agentic workflows, prompt engineering, ML feature engineering, automated testing frameworks, self-service analytics platforms, and intelligent data discovery tools is mandatory.


Key job responsibilities
This role will be responsible for building and maintaining efficient, scalable, and privacy/security-compliant data pipelines, curated datasets for AI/ML consumption, and AI-native self-service data platforms using an AI-first development methodology. You will act as a trusted technical partner to business stakeholders and data science teams, deeply understanding their needs and delivering well-modeled, easily discoverable data optimized for their specific use cases. You are expected to default to AI-powered solutions, leverage agentic frameworks, and build systems that continuously learn and improve through AI—accelerating development velocity, improving data quality, and enabling stakeholder independence through intelligent automation.

Basic qualifications

* 3+ years of data engineering experience with demonstrated stakeholder management and communication skills
* Experience with data modeling, warehousing, and building ETL pipelines for both analytics and ML use cases
* Experience with SQL and at least one programming language (Python, Java, Scala, or NodeJS)
* Experience building datasets or features for machine learning models or self-service analytics
* Extensive hands-on experience with GenAI-enhanced development pipelines, AI coding assistants (GitHub Copilot, Amazon Q, Cursor, etc.), and prompt engineering
* Demonstrated track record of building AI agents, agentic workflows, or AI-powered automation tools
* Demonstrated ability to build tools, frameworks, or platforms that enable others

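As a rough, hypothetical sketch of the dataset-curation work the qualifications above describe (the event schema and feature names here are invented for illustration, not taken from the posting): a small pure-Python ETL step that aggregates raw device events into one ML-ready feature row per device.

```python
from collections import defaultdict

def build_device_features(events):
    """Aggregate raw event dicts into one feature row per device.

    events: iterable of {"device_id": str, "ts": ISO-8601 str, "motion": bool}
    Returns device_id -> {"event_count", "motion_rate", "first_seen",
    "last_seen"}, the kind of row that would be loaded into a feature
    store or curated analytics table.
    """
    acc = defaultdict(lambda: {"event_count": 0, "motion_events": 0,
                               "first_seen": None, "last_seen": None})
    for e in events:
        row = acc[e["device_id"]]
        row["event_count"] += 1
        row["motion_events"] += int(e["motion"])
        ts = e["ts"]  # ISO-8601 strings sort correctly as plain strings
        if row["first_seen"] is None or ts < row["first_seen"]:
            row["first_seen"] = ts
        if row["last_seen"] is None or ts > row["last_seen"]:
            row["last_seen"] = ts
    return {
        dev: {
            "event_count": r["event_count"],
            "motion_rate": r["motion_events"] / r["event_count"],
            "first_seen": r["first_seen"],
            "last_seen": r["last_seen"],
        }
        for dev, r in acc.items()
    }
```

In a production pipeline this logic would typically run inside a Spark or Glue job rather than plain Python, but the shape of the transformation (raw events in, modeled feature rows out) is the same.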
Preferred qualifications

* Experience with AWS technologies like Bedrock, SageMaker, Redshift, S3, AWS Glue, EMR, Athena, Kinesis, Firehose, Lambda, Step Functions, SageMaker Feature Store, and IAM roles and permissions
* Experience building multi-agent systems, LangChain/LangGraph applications, or custom AI agent frameworks
* Experience with prompt engineering, RAG (Retrieval Augmented Generation) systems, and LLM fine-tuning
* Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS, with production-quality code standards
* Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, vector databases, column-family databases)
* Experience with BI tools (QuickSight, Tableau, Looker) and designing datasets for analytical consumption
* Experience building or contributing to AI-native self-service data platforms, feature stores, or intelligent data cataloging systems
* Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, LangChain) and feature engineering best practices
* Experience with orchestration tools (Airflow, Step Functions, MWAA) and AI-powered workflow automation
* Experience with infrastructure-as-code (CDK, Terraform, CloudFormation) and AI-assisted infrastructure management
* Experience with AI-powered monitoring, observability, and anomaly detection platforms
* Experience with API development, microservices architecture, and AI-enhanced API generation
* Experience with semantic search, vector databases, and knowledge graph technologies
* Experience facilitating technical workshops, training sessions, or serving in customer-facing technical roles
* Knowledge of CI/CD practices for data pipelines and ML models
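As an illustrative sketch of the retrieval step in the RAG systems mentioned above (the toy documents and two-dimensional embeddings are invented for the example; a real system would use an embedding model and a vector database rather than hand-written vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k documents whose embeddings are most similar to the query.

    corpus: list of (doc_text, embedding) pairs. In a real RAG pipeline the
    retrieved texts would then be injected into the LLM prompt as context.
    """
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

This brute-force scan is only suitable for tiny corpora; at scale the same similarity search is delegated to an approximate-nearest-neighbor index in a vector store.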