Evaluating in eLearning – How do I Constantly Evaluate my eLearning Solutions
One crucial part of the process for educators and instructional designers is evaluating in eLearning. Not only should we assess our learners to verify that they are gaining new knowledge, but we should also evaluate the design before implementation to make sure the project will be successful.
It is also important to evaluate after implementation, to measure long-term results and determine whether the learning solution was actually effective.
In this post, I will discuss the importance of constantly evaluating your work and how I use some simple tools to evaluate before and after implementing the eLearning solution.
Types of Evaluation
There are different ways to evaluate in eLearning. We need to have a clear understanding of what we want to measure and when. These are the main evaluation processes I follow in my eLearning projects:
- Evaluating the design: In eLearning, as in any traditional learning solution, you need to evaluate your product before implementation to determine whether the design is sound. This is the first evaluation you will have to do to ensure your solution will help learners achieve the learning objectives.
- Evaluating learners' understanding: Additionally, within the eLearning module or as part of the training, you might include evaluation tools to assess whether learners are actually learning and understanding the topics covered in the course. These tools can be quizzes, knowledge checks, interactions or any other form of assessment that can be embedded in the module.
- Evaluating the effectiveness of the eLearning solution: Finally, a different type of evaluation is the final review of the learning solution. It is done after implementation, and its purpose is to measure results and answer the question, "How effective was the learning solution?"
Evaluating the Design – How good was the learning design?
After I finish developing an eLearning module, I always wonder whether the final product represents a good design.
So far, project stakeholders have been really happy with what I have created, but I still want to measure, with a checklist or some other evaluation tool, whether the design is good. Luckily, I found what I consider a "wonderful tool" (like everything Cathy Moore creates :D) called the "Checklist for strong learning design" (Moore, 2011).
Checklist for Strong Learning Design
| No. | Action-oriented materials | Spectrum (score) | Information dump |
| --- | --- | --- | --- |
| 1 | The goal of the project is to change performance in a visible, measurable way. | | The goal of the project is to transfer information into people's brains. |
| 2 | Objectives used to design the materials describe visible, on-the-job behaviors that are necessary to reach the project goal ("sell", "lead", "encrypt", "schedule", "design"). | | Objectives describe knowledge ("understand"). If behaviors are described, they are behaviors that happen during a test ("identify", "explain", "define"). |
| 3 | The format of the materials (webinar, PDF, etc.) is determined by the type of activities and users' needs. | | The format of the materials is determined by tradition, the LMS, or what's most convenient for the client. |
| 4 | The materials feel like one immersive, challenging activity or a series of activities with little interruption. | | The materials feel like a presentation that's occasionally interrupted by a quiz. |
| 5 | The authors appear to respect the learners' intelligence and previous experience. | | The authors appear to doubt the learners' ability to draw conclusions and assume they have no experience. |
| 6 | Activities make people practice making decisions like the ones they make on the job. | | Activities are quizzes, trivia games, or other knowledge checks that don't happen on the job. |
| 7 | Activity feedback shows people what happens as a result of their choice; they draw conclusions from the result. | | Activity feedback explicitly tells people "correct" or "incorrect"; they aren't allowed to draw conclusions. |
| 8 | People can prove that they already know material and skip it. | | Everyone is required to view every bit of information regardless of their existing knowledge or performance on activities. |
| 9 | Reference information is supplied outside the activity in job aids; people practice using the job aids in activities. | | Reference information is delivered through the course or training; people are expected to memorize it or come back to the course for review. |
| 10 | Characters are believable; they face complex, realistic challenges with emotionally compelling consequences. | | Characters seem fake (e.g. preachy or clueless); their challenges are minor and are presented as intellectual exercises. |
| 11 | Visuals are used to convey meaning. | | Visuals are used as "spice". |
| 12 | Photos of people show humans with realistic expressions. Illustrations appear intended for grownups. | | Visuals of people are stock photo models who are over-acting, or childish cartoons. |
| 13 | In eLearning, audio narration is used only for: dramatic realism (e.g. characters' voices in a scenario); explanations of complex or rapidly changing graphics; motivational messages and explanations from people who really exist (e.g. CEO, subject matter expert). | | Audio narration is used to: deliver information while displaying simple, static screens; redundantly read text on the screen; lecture people about what they should or shouldn't do. |
| 14 | The writing is concise, uses contractions, and sounds like a magazine (Flesch Reading Ease score of 50 or higher in Word). | | The writing is wordy and stiff; it sounds like a textbook or insurance policy (Flesch Reading Ease score of 49 or lower in Word). |
| **Total (out of 70)** | | **0** | |
I recreated her tool in Excel and added a point system to the Spectrum column, so each review produces a final score with a maximum possible score of 70.
With this checklist I can go through the different aspects of a course design, give each item a score, and then determine whether the design leans more toward an information dump or an action-oriented course.
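The scoring idea behind the spreadsheet can be sketched in a few lines of Python. This is a hypothetical reimplementation, not the original Excel tool: it assumes each of the 14 checklist items is rated from 0 to 5 (14 × 5 = 70), and the midpoint cutoff of 35 is my own illustrative choice.

```python
# Sketch of the checklist scoring idea, assuming each of the 14 items
# is rated on a 0-5 scale (5 = fully action-oriented, 0 = pure
# information dump), giving a maximum total of 14 * 5 = 70 points.

def checklist_score(ratings):
    """Sum the per-item ratings and classify the overall design."""
    if len(ratings) != 14:
        raise ValueError("The checklist has exactly 14 items.")
    if any(not 0 <= r <= 5 for r in ratings):
        raise ValueError("Each item is rated from 0 to 5.")
    total = sum(ratings)
    # Hypothetical cutoff: above half the maximum counts as leaning
    # action-oriented rather than information dump.
    label = "action-oriented" if total > 35 else "information dump"
    return total, label

# Example: a course rated mostly 4s and 5s.
total, label = checklist_score([4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 3, 4, 5, 4])
print(total, label)  # 57 action-oriented
```

A spreadsheet does the same job, of course; the point is simply that a single total makes it easy to compare the same course before and after a redesign.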
The idea is to build better eLearning products that sit closer to the action-oriented end of the spectrum. I have to admit that my first eLearning creations were unfortunately on the "information dump" side, but my latest projects are getting higher scores, which means they are more action-based training, and this is my main goal when creating eLearning.
Remember, we don't want our adult learners to get bored; we want them to be active participants in their learning process and get real value out of the instruction. They want to know what to do with the new knowledge, straight to the point 🙂
This evaluation exercise made me think about new design strategies I should consider in my next projects, and how I can build better, stronger courses to achieve the maximum of 70 points.
Thanks Cathy Moore for all the work you do and for making the work of instructional designers a lot easier!
Evaluating Learners' Understanding – Quizzes can still be fun
To measure learners’ understanding of the new knowledge, I prefer to have knowledge checks throughout the eLearning module rather than having a final quiz at the end of the course.
The reason is that knowledge checks give learners the opportunity to practice the new knowledge straight away, section by section within the module. A long quiz at the end can be overwhelming and stressful. We want the evaluation process to be fun and engaging so learners can get the most out of it.
Another way to evaluate is with scenarios at different points of the eLearning module. These scenarios must be meaningful and related to a real-world situation or a workplace simulation. In these scenarios, we want to challenge our learners to learn from their mistakes, common assumptions and wrong practices. This is also the best place to give them immediate, corrective feedback so they can learn.
Evaluating the Effectiveness of the eLearning Solution – Using Kirkpatrick’s Model
Some time after implementation, let's say 3 or 6 months after the eLearning was rolled out, it is important to evaluate the effectiveness of the module.
An organised and practical approach to program evaluation in the corporate sector is to use Kirkpatrick's levels of evaluation (Simonson, 2007).
In the table below, I provide a summary of Kirkpatrick's evaluation model. This tool helps me understand the instruments that can be used to gather data at each level. This data is crucial for measuring the effectiveness of the training.
| Level of Evaluation | Instruments to Gather Data |
| --- | --- |
| Level 1 – Reactions or learner satisfaction (how do users perceive the module? What did they like or dislike about the training?) | Survey with questions to identify what learners liked and didn't like about the training module, and how they perceived the training. |
| Level 2 – Learning (have learners advanced in skills, knowledge or attitude?) | Case scenarios, quizzes, completion rates, post-test scenarios with questions in a refresher type of training; check whether the learning objectives defined during the consulting process were achieved. |
| Level 3 – Transfer or application of knowledge (are learners using the knowledge in the workplace?) | |
| Level 4 – Results / impact of training on the organisation (the direct and indirect impact of training on the success of the organisation; the impact must be measurable) | Business KPIs; comparison of pre-test and post-test results. |
| Level 5 – Usability, technical issues, accessibility | Course survey about content, layout, interactivity, presentation, length of course, resources, etc. |
| Level 6 – Return on investment (ROI) | Collect data from Level 4. Can we convert the results of training into monetary value (e.g. savings or process improvements expressed in dollars)? Determine the cost of training (development time/cost, time learners spent in training). Calculate ROI by comparing the monetary benefits to the costs. |
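The Level 6 calculation is just the classic ROI formula. Here is a minimal sketch with made-up numbers, assuming the Level 4 results have already been converted into a single monetary benefit figure:

```python
# Hypothetical ROI sketch for Kirkpatrick's Level 6, assuming the
# Level 4 results have been converted into a monetary benefit figure.

def training_roi(monetary_benefits, training_costs):
    """Classic ROI formula: net benefit over cost, as a percentage."""
    return (monetary_benefits - training_costs) / training_costs * 100

# Example (made-up numbers): the training saved $50,000 in reduced
# errors and cost $20,000 to develop and deliver.
print(training_roi(50_000, 20_000))  # 150.0
```

An ROI of 150% here means every dollar spent on the training returned $1.50 in net benefit; the hard part in practice is not the arithmetic but translating Level 4 results into dollars.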
Example Survey to Measure Level 1 (Reactions or Learner Satisfaction)
Some questions you can include in your survey to measure reactions and learner satisfaction are:
- What did you enjoy most about the course?
- Did you find any errors or mistakes? Please provide more details.
- What did you enjoy least about the course?
- What activities, resources or information would you like to add or remove from this course?
- How would you rate the course materials?
- How would you rate the course content?
- How would you rate the activities in the course?
- Overall, I liked the way that the course/program looked and worked.
- The content is clearly explained.
- The activities and presentation of the content were engaging.
Some of the questions above should be implemented with a rating scale, such as "Excellent" to "Poor" or "Strongly Agree" to "Strongly Disagree".
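Once the responses come in, the scale items are easy to aggregate. This is a small hypothetical sketch (the 1–5 mapping and the sample answers are my own illustration) of turning Likert-style responses into an average score per question:

```python
# Hypothetical mapping of a 5-point agreement scale to numeric scores.
SCALE = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

def average_rating(responses):
    """Map each response to the 1-5 scale and return the mean score."""
    scores = [SCALE[r] for r in responses]
    return sum(scores) / len(scores)

# Example: made-up responses to "The content is clearly explained."
answers = ["Agree", "Strongly Agree", "Neutral", "Agree", "Strongly Agree"]
print(average_rating(answers))  # 4.2
```

Tracking these averages per question, rather than one overall score, makes it clear which part of the course (content, activities, presentation) needs attention.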
Formative Evaluation vs. Summative Evaluation
Another way to look at evaluation procedures is to distinguish between formative and summative evaluations. The main differences between them are:

| Formative Evaluation | Summative Evaluation |
| --- | --- |
| Provides feedback for course improvement. | Considers the overall impact and usefulness of the course. |
| Provides preliminary answers about short-term outcomes. | Usually takes place at the end of the program, but can be ongoing. |
| Is ongoing throughout the training. | Makes an evaluative judgement of the value of the entire program. |
| Supports decisions to improve the program. | Examines the outcomes and consequences of the program. |
I think the main purpose of evaluating is to collect data about how our eLearning solutions are performing and then implement corrective actions. We should engage in continuous improvement, for our learners and for our own satisfaction.
To do this, we can introduce simple instruments such as surveys, grading tables, quizzes, questionnaires and any other tool to gather data.
The idea is to determine if the eLearning design is strong, if learners achieved the learning objectives and if the overall impact of the eLearning solution was as expected.
Once you collect all your data, analyse it and make improvements to your projects. And always, learn from any experience 😀
What tools would you use to evaluate eLearning? Feel free to leave your comments below.
See you next time,
Moore, C. (2011). Checklist for strong learning design. Retrieved from http://blog.cathy-moore.com/2011/07/checklist-for-strong-elearning/
Simonson, M. (2007). Evaluation and distance education, five steps. The Quarterly Review of Distance Education, 8(3).