Mid-Level

AI Engineer


TEGNA


Tysons, Virginia, United States
On-site
Posted March 23, 2026

Job Description

About TEGNA

TEGNA Inc. (NYSE: TGNA) helps people thrive in their local communities by providing the trusted local news and services that matter most. With 64 television stations in 51 U.S. markets, TEGNA reaches more than 100 million people monthly across the web, mobile apps, streaming, and linear television. Together, we are building a sustainable future for local news.

 

We are seeking an AI Engineer to design, develop, and deploy scalable LLM-powered solutions leveraging AWS cloud services, Snowflake, and modern GenAI frameworks. This role focuses on building production-grade AI systems, optimizing LLM inference, and integrating enterprise data platforms with cutting-edge AI technologies.

The ideal candidate combines strong cloud engineering expertise with hands-on experience in prompt engineering, foundation models, agentic AI systems, and data pipelines within the Snowflake and AWS ecosystems.

Key Responsibilities

AI & Generative AI Development

  • Design, develop, and deploy LLM-powered applications and agentic AI systems in production environments.
  • Implement advanced prompt engineering strategies, including:
      • Prompt chaining and multi-turn orchestration
      • Few-shot learning and in-context learning
      • Chain-of-Thought (CoT) and Tree-of-Thought (ToT) prompting
      • Function calling and tool-use optimization
      • Structured output generation (JSON, XML schemas)
  • Build and optimize Retrieval-Augmented Generation (RAG) systems integrating Snowflake data with LLMs.
  • Evaluate and fine-tune foundation models via AWS Bedrock or other managed AI services.
  • Develop guardrails for AI systems including hallucination mitigation, grounding, and safety controls.
  • Implement LLMOps best practices for model lifecycle management:
      • Model versioning, deployment, and rollback strategies
      • Prompt versioning and experimentation frameworks
      • Evaluation frameworks for LLM outputs
  • Monitor and observe LLM application performance using observability tools.

Cloud & Platform Engineering (AWS)

Architect scalable AI solutions using AWS services such as:

  • Bedrock / SageMaker – Access and fine-tune foundation models
  • Lambda – Serverless LLM application deployment
  • EC2 – GPU-accelerated inference and batch processing
  • Step Functions – Orchestrate complex LLM workflows and agentic pipelines
  • CloudWatch – Monitoring, logging, and alerting for AI systems

AI Application Development

  • Build APIs and backend services to operationalize AI solutions.
  • Integrate LLM/AI systems into internal applications, sales tools, or analytics platforms.
  • Implement streaming and real-time inference for low-latency AI applications.
  • Collaborate with stakeholders to translate use cases into production AI systems.

Required Qualifications

  • 5+ years of experience in AI/ML, software, or data engineering.
  • Proficiency in Python with a solid understanding of ML fundamentals.
  • Strong hands-on experience with AWS, APIs, and microservices architecture.
  • Experience integrating AI solutions with data systems such as Snowflake.
  • Practical experience with prompt engineering.
  • Experience with LLM orchestration frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel, or similar).
  • Experience with agentic frameworks (AutoGen, CrewAI, or equivalent).

Preferred Qualifications

  • Experience building RAG pipelines in enterprise environments.
  • Knowledge of MLOps best practices.
  • Experience with vector databases and embeddings.
  • Familiarity with model evaluation frameworks (e.g., LLM eval metrics).
  • Experience implementing AI governance and responsible AI practices.
  • Background in sales, media, marketing analytics, or enterprise data platforms (a plus).
