Evaluating your e-Learning Design

Evaluation

After I finish developing an eLearning module, I always wonder whether the final product represents a good design. So far, project stakeholders have been really happy with what I have created, but I wanted a checklist or some other evaluation tool to measure whether the design was good. I found what I consider a wonderful tool (like everything Cathy Moore creates 🙂 ) called “Checklist for strong learning design” (Moore, 2011).

I recreated her tool in Excel and added a point system to the spectrum section so that it produces a final score, with a maximum possible score of 70 (14 items at up to 5 points each).

Checklist for strong learning design

Module:

Each item is rated on a spectrum from 5 (fully action-oriented materials) to 1 (pure information dump), so with 14 items the maximum possible score is 70.

1. Action-oriented (5): The goal of the project is to change performance in a visible, measurable way.
   Information dump (1): The goal of the project is to transfer information into people’s brains.

2. Action-oriented (5): Objectives used to design the materials describe visible, on-the-job behaviours that are necessary to reach the project goal (“sell”, “lead”, “encrypt”, “schedule”, “design”).
   Information dump (1): Objectives describe knowledge (“understand”). If behaviours are described, they are behaviours that happen during a test (“identify”, “explain”, “define”).

3. Action-oriented (5): The format of the materials (webinar, PDF, etc.) is determined by the type of activities and users’ needs.
   Information dump (1): The format of the materials is determined by tradition, the LMS, or what’s most convenient for the client.

4. Action-oriented (5): The materials feel like one immersive, challenging activity or a series of activities with little interruption.
   Information dump (1): The materials feel like a presentation that’s occasionally interrupted by a quiz.

5. Action-oriented (5): The authors appear to respect the learners’ intelligence and previous experience.
   Information dump (1): The authors appear to doubt the learners’ ability to draw conclusions and assume they have no experience.

6. Action-oriented (5): Activities make people practice making decisions like the ones they make on the job.
   Information dump (1): Activities are quizzes, trivia games, or other knowledge checks that don’t happen on the job.

7. Action-oriented (5): Activity feedback shows people what happens as a result of their choice; they draw conclusions from the result.
   Information dump (1): Activity feedback explicitly tells people “correct” or “incorrect”; they aren’t allowed to draw conclusions.

8. Action-oriented (5): People can prove that they already know material and skip it.
   Information dump (1): Everyone is required to view every bit of information regardless of their existing knowledge or performance on activities.

9. Action-oriented (5): Reference information is supplied outside the activity in job aids; people practice using the job aids in activities.
   Information dump (1): Reference information is delivered through the course or training; people are expected to memorize it or come back to the course for review.

10. Action-oriented (5): Characters are believable; they face complex, realistic challenges with emotionally compelling consequences.
    Information dump (1): Characters seem fake (e.g. preachy or clueless); their challenges are minor and are presented as intellectual exercises.

11. Action-oriented (5): Visuals are used to convey meaning.
    Information dump (1): Visuals are used as “spice”.

12. Action-oriented (5): Photos of people show humans with realistic expressions; illustrations appear intended for grownups.
    Information dump (1): Visuals of people are stock photo models who are over-acting, or childish cartoons.

13. Action-oriented (5): In eLearning, audio narration is used only for:
    > Dramatic realism (e.g. characters’ voices in a scenario).
    > Explanations of complex or rapidly-changing graphics.
    > Motivational messages and explanations from people who really exist (e.g. CEO, subject matter expert).
    Information dump (1): Audio narration is used to:
    > Deliver information while displaying simple, static screens.
    > Redundantly read text on the screen.
    > Lecture people about what they should or shouldn’t do.

14. Action-oriented (5): The writing is concise, uses contractions, and sounds like a magazine (Flesch Reading Ease score of 50 or higher in Word; a rough sketch of the formula follows the checklist).
    Information dump (1): The writing is wordy and stiff; it sounds like a textbook or insurance policy (Flesch Reading Ease score of 49 or lower in Word).

SCORE (per rating column): __ __ __ __ __
TOTAL (out of 70): __
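Item 14 relies on the Flesch Reading Ease score, which Word calculates for you. As a rough illustration of what that number measures, here is a minimal Python sketch of the formula; the syllable counter is a deliberately naive approximation, so expect its output to differ slightly from Word’s:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of consecutive vowels,
    treating a trailing silent 'e' as non-syllabic."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Scores of 50+ read like a magazine; lower reads like a textbook."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease(
    "Pick the policy that fits. Then try it on a real case."), 1))
# Short, plain sentences score well above the 50-point threshold.
```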

With this checklist I can go through the different aspects of a course design, give each item a score, and then determine whether the design is closer to an information dump or to an action-oriented course. The idea is to build better eLearning products that sit on the action-oriented side of the spectrum. Of course, I am still learning, and I think my first products are unfortunately on the “information dump” side, but one of the last projects I created scored 54 out of 70, which is not bad.
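As an illustration of the spreadsheet arithmetic (my own sketch, not part of Moore’s checklist), here is how the 14 spectrum ratings roll up into a score out of 70; the ratings below are hypothetical but produce the 54/70 result mentioned above:

```python
# Mirror of my Excel point system: 14 checklist items, each rated
# from 1 (information dump) to 5 (action-oriented), max 14*5 = 70.

def checklist_total(ratings: list[int]) -> int:
    """Sum the 14 spectrum ratings into a single score out of 70."""
    if len(ratings) != 14:
        raise ValueError("the checklist has exactly 14 items")
    if any(r not in (1, 2, 3, 4, 5) for r in ratings):
        raise ValueError("each item is rated on a 1-5 spectrum")
    return sum(ratings)

def lean(score: int) -> str:
    """Rough reading: above the all-3s midpoint (42) the design leans
    action-oriented; at or below it, information dump."""
    return "action-oriented" if score > 14 * 3 else "information dump"

# Hypothetical ratings: mostly 4s with two 3s -> 54 out of 70.
ratings = [4, 4, 4, 4, 4, 3, 4, 4, 4, 3, 4, 4, 4, 4]
score = checklist_total(ratings)
print(f"{score}/70, leans {lean(score)}")  # 54/70, leans action-oriented
```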

This evaluation exercise made me think about new design strategies I should consider in my next projects, and about how I can build better, stronger courses to reach the maximum of 70 points.

Thanks, Cathy Moore, for all the work you do and for making the work of instructional designers a lot easier 😀

Reference: Moore, C. (2011). Checklist for strong learning design. Retrieved from http://blog.cathy-moore.com/2011/07/checklist-for-strong-elearning/

My Excel version of the checklist is here

Evaluation and Distance Education

Evaluation

After developing and implementing several eLearning solutions in my workplace, we are now at a point where we want to take some time to analyse the effectiveness of the training. To do this, we decided to develop a progress report for the stakeholders that gathers the results of the course survey along with insights on course participation, engagement, issues found, and the next steps to bring the training to completion or closure.

Remember that the idea for some of the courses is to offer a refresher module later on, or to reach closure so we can measure whether the learning solution was effective and allowed us to achieve the learning objectives identified at the beginning of the consultation process with the stakeholders.

This evaluation phase is relevant because it brings critical information about how learners perceived the learning tool and how we can improve future projects. From this evaluation we can extract valuable information that will shape the design criteria for the next eLearning solutions, so it is important to have an organised approach to collating the data and getting the best possible information for our own learning as instructional designers.

An organised and practical approach to program evaluation in the corporate sector is to use Kirkpatrick’s levels of evaluation (Simonson, 2007). I have prepared the table below to summarise Kirkpatrick’s evaluation model, adding levels 5 and 6 to adapt it to my work. This tool helps me understand the instruments that can be used to gather data at each level.
Training Evaluation

Level 1 – Reactions or Learner Satisfaction
How do users perceive the module? What did they like or dislike about the training? This can be measured with the course survey.
Instruments to gather data: a survey with questions to identify what people liked and didn’t like about the training module, and questions to determine how learners perceived the training. Examples:

Rating questions (strongly disagree, disagree, neutral, agree, strongly agree):
  • The information in the module is relevant.
  • The presentation of the module was engaging.
  • Overall, I liked the module (content, layout, interactivity, presentation).

Open questions:
  • How likely are you to use the information from the course?
  • How likely are you to share the information from the course?
  • What did you enjoy most and least about the module?
  • What would you change (add, remove, improve) in the module?
  • Would you like to add any other comments?

Level 2 – Learning
Have learners advanced in skills, knowledge, or attitude? This can be measured with the instruments below, and by checking whether the learning objectives defined during the consultation process were achieved.
Instruments to gather data:
  • Case scenarios with multiple-choice questions.
  • Quizzes.
  • Pre-tests.
  • Post-tests (e.g. a scenario with questions in a refresher type of training).
  • Completion rate of the course.
  • A check that the learning objectives were achieved.

Level 3 – Transfer or Application of Knowledge
Are learners using the knowledge in the workplace?
Instruments to gather data: check whether learners are:
  • Using the policies.
  • Visiting the intranet.
  • Having discussions about the topics covered in the training.
  • Following the procedures, skills, knowledge, and attitudes explained in the training.

Level 4 – Results / Impact of Training on the Organisation
This is the direct and indirect impact of training on the success of the organisation; the impact must be measurable.
Instruments to gather data:
  • Pre-tests.
  • Post-tests (comparing pre-test and post-test results).
  • Business KPIs, and how the training helped achieve them.

Level 5 – Usability, Technical Issues, Accessibility
This can be measured with the course survey.
Instruments to gather data: survey questions about content, layout, interactivity, presentation, length of the course, resources, etc.

Level 6 – Return on Investment (ROI)
This is measured by collecting data from Level 4:
  1. Collect the results data from Level 4.
  2. Convert the results of training into monetary value where possible: how much was saved, or what is the process improvement worth in dollars?
  3. Determine the cost of training (development time and cost, plus the time users spent doing the training).
  4. Calculate ROI by comparing the monetary benefits to the costs (a worked sketch follows this table).
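To make the Level 6 arithmetic concrete, here is a minimal Python sketch of the ROI calculation following the four steps above. Every figure and rate in it is a hypothetical placeholder, not data from my projects:

```python
# A minimal sketch of the Level 6 ROI arithmetic. All figures are
# hypothetical placeholders, not real project data.

def training_roi(monetary_benefit: float, training_cost: float) -> float:
    """Classic ROI formula: net benefit over cost, as a percentage."""
    return (monetary_benefit - training_cost) / training_cost * 100

# Step 2: results of training converted into monetary value
# (savings, process improvements, etc.), taken from Level 4 data.
monetary_benefit = 15_000.0

# Step 3: cost of training = development time/cost plus the time
# learners spent doing the training, converted to dollars.
development_cost = 8_000.0
learner_time_cost = 200 * 0.5 * 40.0  # 200 learners x 0.5 h x $40/h
total_cost = development_cost + learner_time_cost

# Step 4: compare the monetary benefits to the costs.
print(f"ROI = {training_roi(monetary_benefit, total_cost):.1f}%")
# (15000 - 12000) / 12000 * 100 = 25.0%
```

A negative result simply means the training cost more than the measurable value it returned.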

Reference:

Simonson, M. (2007). Evaluation and distance education: Five steps. The Quarterly Review of Distance Education, 8(3).