Mid-Level

Enterprise Data & Generative AI Engineer

Riverside Natural Foods

Compensation

$120,000 - $145,000/year

LinkedIn
Hybrid
Posted April 1, 2026

Job Description

Join Riverside Natural Foods Ltd., a $300 million+ Canadian-based, family-owned, and globally operating business, committed to leaving the world better than we found it. As a B-Corp certified, Triple-Bottom Line company, we proudly manufacture nutritious, 'better-for-you' snacks such as MadeGood and GOOD TO GO. We value teamwork, humility, respect, ownership, adaptability, grit, and fun.

We’re on an ambitious mission to double our business by 2027, and we need talented individuals like you to help us reach new heights. At Riverside, you’ll have the opportunity to chart your own path to success while contributing to ours. We believe anything worth doing is worth doing right, and our values will guide us through the rugged terrain – and yes, it will get rough. But that’s what makes the journey worthwhile.

So, lace up your boots and let’s tackle the climb together.

You can learn more about us at www.riversidenaturalfoods.com.


Position Summary:

As Riverside Natural Foods continues its business transformation journey, we are investing in leading-edge data and analytics capabilities to support long-term, values-based growth. A key pillar of this transformation is the evolution of a trusted data ecosystem as a foundation for our Business Intelligence and AI strategy.

The Enterprise Data & Generative AI Engineer plays a central role in building and operating the data foundation that powers analytics, machine learning, and Generative AI across the organization. This role spans cloud lakehouse platforms (Databricks or Snowflake), syndicated commercial data (e.g., POS, Nielsen), IoT/PLC data from production lines, unstructured data sources, and SAP Datasphere. The engineer ensures that all enterprise data domains are integrated into a governed, scalable, AI‑ready Data Fabric that supports advanced analytics and GenAI applications.

This individual must be a self-starter with strong communication skills, a positive outlook, curiosity, and a deep understanding of SAP-centric enterprise data architecture and its integration with modern lakehouse data ecosystems.

Primary Responsibilities:

Data Integration & Pipeline Engineering

  • Support the execution of Riverside’s BI and AI Strategy in alignment with enterprise priorities.
  • Design and implement scalable ingestion pipelines across different application platforms, including POS feeds, Nielsen syndicated data, IoT/PLC data, and unstructured sources such as documents, logs, and images.
  • Support the reliable delivery of current reports consumed by the business and their transition to a better-designed technology platform.
  • Optimize pipelines for performance, cost, and reliability across the Data Fabric.

6-12 Month Horizon:

  • Build ELT/ETL workflows that support analytics, ML, and GenAI use cases across structured, semi‑structured, and unstructured data based on business priorities.
  • Develop real‑time or near‑real‑time data flows for AI‑driven applications using event‑driven architectures.

Enterprise Data Architecture

  • Model and harmonize SAP S/4HANA data structures while integrating them with external commercial, operational, and sensor data in collaboration with the SAP Analytics Lead.
  • Integrate SAP and non‑SAP data into Databricks or Snowflake to support advanced analytics, ML, and GenAI workloads.
  • Contribute to the design of a unified Data Fabric that supports cross‑domain analytics and AI.

Data Governance, Quality & Observability

  • Implement data quality rules, lineage tracking, and metadata management across SAP, cloud, IoT, and syndicated data sources.
  • Ensure compliance with security, privacy, and regulatory requirements.
  • Monitor data drift, embedding drift, and AI‑specific data quality indicators.

Platform Engineering & Automation

  • Use infrastructure‑as‑code and CI/CD to deploy and manage data pipelines and lakehouse components.
  • Automate documentation, testing, and pipeline optimization using GenAI‑assisted tools.
  • Contribute to the design of enterprise data products that are versioned, governed, and AI‑ready.

AI/ML & Generative AI Enablement (emerging area)
