Beyond Smile Sheets: My Top Two Tips for Evaluating Public Health E-Learning Solutions


As we wrap up 2023, it’s the perfect time to reflect on the effectiveness of our e-learning solutions and make plans for revising and designing new learning experiences in the new year. I help many clients develop evaluation plans (all year long!), so I’m sharing my top two tips for evaluation in today’s post.

Tip #1: Plan for evaluation at the beginning of the learning design process

“While we will be using data to inform our work, we will also need to design our learning experiences in ways that enable the effective and efficient capture of that data.” (Torrance, page 6)

Far too often in public health, e-learning, and instructional design, evaluation is an afterthought. It's tacked on at the end for all kinds of reasons: lack of expertise, capacity, resources, and so on.

However, evaluation planning needs to happen at the beginning of the learning design process. Plans for data capture need to be woven into the overall design plan. So when you sit down with your team, partners, and/or consultants for needs analysis and kick-off planning, you want to be thinking about:

  • Your learning objectives (and make sure they are measurable!)

  • How and when you are going to measure the outcomes of interest

  • How to use the right evaluation technique for the right learning objectives (i.e., learners can’t demonstrate skills by completing a post-training satisfaction survey)

  • Your evaluation capacity to capture the outcomes of interest (e.g., time, funding, staff capacity, data capture that’s possible with your learning management system and survey technology, etc.)

  • The audience for your evaluation data and how the data will be used

Tip #2: Measure outcomes beyond “smile sheets”

When I started my certificate program in e-learning instructional design, one of the first phrases I learned (and laughed about!) was “smile sheets”. The term is used to describe the satisfaction surveys that learners often take immediately upon completing an online course or training. It’s easy to draw the wrong conclusions from these surveys, especially when their results are positive.

For example, I often see clients assume that their training or course has been effective in teaching on-the-job skills just because the post-course survey shows that learners liked the trainers and thought the training was interesting.

While that’s great news, it really only conveys learners’ immediate reactions, not higher-level outcomes like learning, behavior, and organizational results (see Kirkpatrick’s four levels of training evaluation for a helpful framework).

While satisfaction is an important first step, we really want to know more about:

  • Learning (knowledge, demonstrated skills, confidence in applying what they learned when they return to the job, etc.)

  • Behavior (what knowledge and skills learners have successfully applied back on the job, what factors have enabled or challenged that application, etc.)

  • Results (what organizational-level indicators have been impacted by the application of training or course content, like productivity, patient satisfaction, reduction in costs, etc.)

  • Return on investment (was this worth your learners’ time?)

If you need help thinking through an evaluation plan for an e-learning experience, you are welcome to reach out and pick my brain or inquire about collaborating with me to develop an assessment and/or evaluation plan.

In the meantime, definitely check out these books, which are some of the “go-tos” I grab from my bookshelf when I’m working on evaluation projects:


I’d love to hear from you!

  • Is your organization measuring outcomes beyond “smile sheets” for e-learning courses and trainings? Why or why not?

  • What are some of the biggest evaluation challenges that you encounter when trying to measure the effectiveness of your e-learning solutions?