HIH - Machine Learning Lead Analyst
Cigna
Job Description
Position Overview
We’re looking for a Machine Learning Lead Analyst to accelerate our embedded AI solutions journey (UI/API and LLM/agentic capabilities) across our platforms. This journey requires building foundational AI capabilities and reusable patterns, enabling teams through architecture decisions, establishing AI strategy and roadmap execution, and selecting cloud services wisely to deliver scalable outcomes. The role requires proven technology leadership with strong hands-on experience delivering modern AI/ML and GenAI solutions.
We are seeking a highly skilled and innovative Machine Learning Lead Analyst with 5–8 years of experience to join our team. This role is ideal for professionals who have a strong software engineering and ML foundation and have grown into GenAI, agentic systems, conversational AI, and LLMOps. You will be instrumental in designing, developing, deploying, and governing intelligent systems that leverage LLMs, tool-calling, orchestration frameworks, and Retrieval-Augmented Generation (RAG) to solve complex enterprise problems with responsible AI practices.
Responsibilities
- Understand a program’s architecture/design, logical and physical data models, and influence architecture decisions for AI-enabled platforms.
- Lead the design and implementation of agentic and conversational AI solutions, including tool-calling, orchestration patterns, and multi-step workflows.
- Define and deliver AI strategy and roadmap execution, aligning architecture patterns, platform capabilities, and delivery milestones with business needs.
- Design and develop embedded applications that include UI/API integration with LLM systems, tool integrations, and Python services to provide accurate recommendations and automation.
- Build and standardize RAG implementations (chunking strategies, embeddings, retrieval patterns, grounding, caching) for enterprise-grade knowledge experiences.
- Apply prompt engineering and prompt optimization techniques (templates, routing, structured outputs, guardrails) to improve quality and reliability.
- Utilize LLMs, open-source models, and ML/numerical frameworks to build transformative solutions, balancing performance, cost, and maintainability.
- Establish and maintain LLMOps practices including evaluation frameworks, test harnesses, regression datasets, offline/online metrics, and safety guardrails.
- Drive cost/latency/observability optimization across AI systems—instrumentation, tracing, token usage insights, performance tuning, and reliability patterns.
- Work with open-source ecosystems (TensorFlow, PyTorch, Hugging Face Transformers) and cloud solutions (AWS/Azure/GCP) to deliver fit-for-purpose designs.
- Partner with Product Development as a GenAI SME and architect, developing scalable, resilient, ethical AI solutions aligned with responsible AI principles.
- Develop and maintain AI pipelines as needed (data preprocessing, feature extraction, training workflows, evaluation), with strong focus on production readiness.
- Support product delivery partners in the successful build, test, and release of solutions, enabling adoption through reusable patterns and reference implementations.
- Operate independently with minimal oversight, proactively identifying gaps, risks, and opportunities to improve solution effectiveness and delivery outcomes.
- Examine and improve accuracy, effectiveness, reliability, and data quality practices supporting AI systems and downstream decisioning.
- Support the full software lifecycle of design, development, testing, deployment, and support for technical delivery.
- Understand the Business and the Application Architecture end-to-end, ensuring AI solutions integrate seamlessly with legacy and cloud-native systems.
- Participate in daily standup meetings, providing updates on backlog progress, blockers, risks, and technical decisions.
- Build solutions that align with responsible AI practices, including privacy, security, fairness, explainability, and compliance requirements.
Qualifications
Required Skills:
- Bachelor’s Degree in Information Technology, Computer Science, Engineering, or a related field
- 5–8 years of experience building and deploying enterprise-grade AI/ML solutions with increasing responsibility in architecture and technical leadership
- Strong experience designing and deploying LLM-enabled systems for enterprise use cases (grounding, safety, evaluation, reliability)
- Hands-on experience building agentic systems capable of tool use, autonomous task execution, planning/routing, and multi-step workflows
- Experience implementing RAG architectures and enterprise knowledge grounding patterns
- Strong LLM prompt engineering skills including structured prompting, tool schemas, prompt routing, and optimization techniques
- Strong experience in Python for AI services, orchestration, and backend development
- Experience with API orchestration/tool-calling frameworks (e.g., OpenAI API, LangChain, CrewAI or similar patterns)
- Experience with ML libraries/frameworks such as TensorFlow, PyTorch, or Keras; and NLP tooling such as Hugging Face, spaCy, NLTK
- Familiarity with cloud platforms and services such as AWS/Azure/GCP and deploying AI services at scale
- Strong understanding of evaluation and LLMOps: quality metrics, automated testing, guardrails, monitoring, rollback patterns
- Strong knowledge of data structures, algorithms, and software engineering principles
- Strong collaboration skills to work with architects, product owners, engineers, and business stakeholders
- Strong communication skills with the ability to convey complex AI/architecture concepts to diverse audiences
- Experience with observability practices: logging/tracing/metrics, latency profiling, token cost management, reliability monitoring
Required Experience & Education:
- Proven experience designing and developing large-scale enterprise application solutions, including AI-enabled platforms
- College degree (Bachelor) in related technical/business areas or equivalent work experience
- 5–8 years of experience in Software Engineering / ML Engineering / AI Engineering
- Hands-on experience engineering solutions in cloud environments and integrating LLM systems with enterprise applications
- Demonstrated ability to lead architecture decisions, mentor others, and drive adoption of reusable AI patterns
- Willingness to learn and a go-above-and-beyond attitude; thrives in ambiguity and fast-moving delivery environments
- Experience with modern and legacy development technologies and integration approaches
Desired Experience:
- Healthcare experience is preferred
- Experience with production governance for responsible AI, compliance, and regulated environments is a plus
Location & Hours of Work
- Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate
- Primarily based in the Innovation Hub in Hyderabad, India in a hybrid working model (3 days WFO and 2 days WAH)
Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.