IC Blog: Evaluating Virtual Learning Programs

11 Aug 2021 6:36 AM | Anonymous member (Administrator)

by Lauren McAusland

The mass closures of schools, libraries and other public institutions in March 2020 prompted parents, teachers, students, and community members to search for resources to support their learning from home. Across the country, museums, art galleries, parks, zoos and historic sites jumped into action, creating innovative ways to connect with our audiences digitally and support distanced learning.

At Royal Botanical Gardens (RBG), we created a new virtual learning platform called RBG at Home. This platform sought to provide educational resources in four primary forms—blog articles, pre-recorded videos, activity worksheets, and live virtual learning sessions—to three audiences: teachers, caregivers, and lifelong learners.


Before the pandemic, creating a virtual learning platform would have been a months-long development project. However, given the urgency of the situation and a strong desire to support our audiences through the challenging transition into a provincial lockdown, our amazing team had the program up and running in a matter of weeks. In the early days of the program, our education department quickly published a large amount of content based on our sense of what our audience would like. On the surface, our offerings received a positive response, so over the next few months, both in and out of lockdowns, our education team continued producing content on that same intuition. As we approached the one-year mark, we began to ask ourselves some questions: Is this program doing what we want it to? Is it reaching who we want it to? What could we do to make it even more successful? Is it worth continuing to run this program once on-site learning resumes?

Standing at this crossroads of 'do we continue this program, and if we do, how can we make it better?', we knew the time had come to do an evaluation. Evaluations give you a detailed understanding of how your program operates in the real world so that you can decide what to modify to improve the program's ability to reach and engage your audience and to meet program and institutional objectives. However, evaluations can be daunting tasks. Having a staff member dedicated to evaluation is a luxury most institutions don't have, leaving program coordinators and educators struggling to fit evaluation into their already busy schedules.

The good news: you don't need to be a professionally trained evaluator to collect good data to support programming decisions! If you are thinking about doing an evaluation but don't know where to start, the most important thing you can do is sit down and think about your evaluation's objectives and big-picture questions before you start drafting a survey or begin any data collection. Understanding what you want to learn will help ensure you ask your participants the right 'little' questions to get good data to answer your 'big' questions.

For example, one of our evaluation questions was ‘to what extent has there been engagement with French content on RBG at Home?’ By asking this, we wanted to learn if people had been using the French content to help us figure out if this is something we should make more or less of. To answer this question, we collected engagement metrics (likes, shares, views, etc.) of French content and we also asked participants to identify in a survey if they engage with content in French and/or English.
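A tally like the one described above can be done with a short script. The sketch below is purely illustrative—the field names and numbers are hypothetical placeholders, not real RBG at Home data—but it shows one simple way to total engagement metrics per content language before comparing them:

```python
# Hypothetical sketch: totalling engagement metrics (likes, shares, views)
# per content language. The records below are illustrative placeholders.

def summarize_by_language(posts):
    """Return total likes, shares, and views for each content language."""
    totals = {}
    for post in posts:
        stats = totals.setdefault(
            post["language"], {"likes": 0, "shares": 0, "views": 0}
        )
        for metric in ("likes", "shares", "views"):
            stats[metric] += post[metric]
    return totals

posts = [
    {"language": "en", "likes": 120, "shares": 30, "views": 900},
    {"language": "fr", "likes": 25, "shares": 5, "views": 180},
    {"language": "fr", "likes": 40, "shares": 12, "views": 300},
]

print(summarize_by_language(posts))
# → {'en': {'likes': 120, 'shares': 30, 'views': 900},
#    'fr': {'likes': 65, 'shares': 17, 'views': 480}}
```

Pairing totals like these with the self-reported survey answers gives two independent views of the same 'big' question, which makes the resulting decision much easier to defend.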

In total, we identified eight of those 'big' questions we wanted our evaluation to answer. Perhaps the biggest one was: are people interested in continuing to use virtual learning programs when on-site learning resumes? Our survey found that yes: 70% of respondents indicated they were likely or very likely to continue using RBG at Home. Additionally, we found that our two audiences, 'adults' and 'caregivers,' indicated equal interest in continued engagement.

Every virtual learning program operates slightly differently, so the results of the RBG at Home evaluation may not apply to every similar site. But for all institutions that have put out online educational content and are contemplating whether this is a strategy worth continuing, the time has come to examine our programs with a critical eye and assess how we can modify them to produce content that continues to engage virtual audiences as on-site programming returns.

Lauren McAusland is a recent graduate of the University of Toronto's Master of Museum Studies program and led the RBG at Home evaluation in her position as Interpretation Intern at Royal Botanical Gardens, Burlington, ON.


