
Senior Data Scientist

Gore Mutual Insurance

Cambridge, Ontario, Canada; Toronto, Ontario, Canada
On-site
Posted April 10, 2026

Job Description

At Gore Mutual, we’ve always set ourselves apart as a modern mutual that does good. Now, we’re proudly building on that legacy to transform our company—and our industry—for the better.

Effective January 1, 2026, Gore has joined Beneva—the country’s largest mutual insurance company—as part of its Property & Casualty operations in Ontario and Western Canada. During 2026, Gore will combine its operations with Unica Insurance, Beneva’s Ontario-based subsidiary specializing in niche commercial and personal insurance, creating a stronger, more diversified mutual insurer with greater scale and long-term stability.

Every decision and investment remains anchored in long-term benefits to customers, members, and communities. Come join us.

Our Actuarial team is looking to onboard a Senior Data Scientist to help us drive value from data through the development of ML and AI algorithms, models, and pipelines. This role will be responsible for delivering significant business value through efficient, reusable, and automated ML and AI algorithms, models, and pipelines.

What will you do?

Generate value through algorithm / model driven insight from data

  • Understanding of different algorithm types and their applications (supervised classification, regression, unsupervised learning, reinforcement learning, etc.)
  • Understanding of different modelling architectures and their strengths and weaknesses (e.g., gradient boosting, clustering, autoencoders, LLMs), as well as explainability techniques such as SHAP
  • Experience in the design and practical application of AI algorithms in a business setting, working with business users to ensure that models are suitably built and applied
  • Understanding of different optimization techniques (linear optimization, integer optimization, dynamic programming, etc.)
  • Experience in the design and practical application of optimization solutions in a business setting, given business constraints

Data ingestion, model deployment and validation

  • Pull data from various systems using SQL, PySpark, and other standard languages for relational and distributed databases, and set up data ingestion pipelines to third-party sources via APIs
  • Work with engineering partners to construct automated data pipelines for continuous delivery of data to models
  • Deploy machine learning models into production environments (e.g., implementing continuous integration and delivery (CI/CD) pipelines for automated model deployment, and applying MLOps practices to maintain the lifecycle of machine learning models through testing and validation)

Enhanced feature engineering of data

  • Work with Ops-based systems for feature stores and automation of feature development (MLflow, Databricks, etc.)
  • Understand feature transformation techniques to extract maximal value from data with respect to different algorithm types

Generate value through the integration of machine learning and AI algorithms into core Claims functions

  • Understand various c
Tags: Python, Go, AWS, Azure, Machine Learning, AI, Data, Product, Design