Evaluating eLearning is often a one-and-done exercise, conducted at the end of a learning experience. But factoring in data collected throughout the learning process can turn evaluation into an ongoing activity integrated into every step, from planning and design through the entire learner experience, making feedback and revision integral parts of the cycle.
In her comprehensive report on the emerging field of learning engineering, Ellen Wagner suggested looking at “continual enterprise feedback and revision” as a way to think about evaluation. “Learning and training outcomes are as much a part of formative inputs for the next round of learning experiences, as are materials, platforms, and conditions for performance evaluation, learner evaluation, instructor evaluation, and institutional evaluation,” she wrote.
Gathering and analyzing data that could provide feedback throughout the learning development and use cycle is likely to be a key part of a learning engineer’s role, according to Wagner’s report.
L&D wants rigorous evaluation
A 2018 eLearning Guild survey and report on learning evaluation found “a palpable desire for better, more rigorous learning evaluation,” according to the report’s author, Will Thalheimer. Nearly 85 percent of the L&D professionals who participated in the survey wanted to improve or substantially change the way their organizations measure and evaluate learning.
“Evaluation should not just be tacked on at the end; it should be baked into our design and development process from the beginning. Before we even create a list of learning objectives, we should create a list of evaluation objectives,” Thalheimer wrote.
Evaluation and feedback provide the fuel for L&D work. Feedback “enables us to maintain our most effective learning approaches and improve design factors that aren’t working,” he wrote.
Continuous evaluation
In employee performance management, a continuous feedback and evaluation model is a mechanism for guiding employee development and for systematically identifying and discussing strengths and weaknesses.
Whether applied to learners or to eLearning—or both—a continuous evaluation process would fill the same role.
- Evaluating eLearning continuously or iteratively provides opportunities to fine-tune it or to make larger adjustments as learners’ needs change. L&D teams are often tasked with improving eLearning based on evaluation results. If they receive meaningful evaluation and feedback from learners throughout the design, development, and iterative deployment of eLearning, they are better positioned to make meaningful improvements than if they get feedback only after a completed eLearning module has been deployed and used by a group of learners. Improvements based on that late feedback may have to wait until the next update, often a year or more away. Building a mechanism for learners to provide feedback on eLearning at any time can do more than improve the eLearning; it can also drive greater engagement and a sense of learner “ownership” of the training.
- Evaluating learners continuously provides opportunities to tailor learning paths to individual or team needs. It is easier to identify weak areas as they arise (say, a learner misses a target or performs poorly on one element of a year-long eLearning plan) than to go back and hunt for those knowledge gaps during an annual review. On a micro level, evaluating learners during eLearning, dynamically and within the module, provides a basis for adaptive eLearning. The eLearning itself uses the learner’s responses to a question or exercise as feedback; this information guides the learner experience, so learners are served more content on topics where they demonstrate less knowledge. A simple illustration of this adaptive logic follows this list.
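To make the adaptive idea in the second bullet concrete, here is a minimal sketch, in Python, of how a module might use per-topic quiz responses as feedback to decide what to serve next. It is an illustration under assumed names (TopicStats, AdaptiveModule, next_topic), not the API of any particular authoring tool or LMS.

```python
# Minimal sketch (assumed names, not a real LMS API): track per-topic quiz
# results and route the learner toward the topics where they are weakest.
from dataclasses import dataclass


@dataclass
class TopicStats:
    correct: int = 0
    attempted: int = 0

    @property
    def accuracy(self) -> float:
        # Treat untried topics as unknown (0.0) so they surface early.
        return self.correct / self.attempted if self.attempted else 0.0


class AdaptiveModule:
    def __init__(self, topics):
        self.stats = {t: TopicStats() for t in topics}

    def record_response(self, topic: str, is_correct: bool) -> None:
        # Each question or exercise response feeds back into the topic's stats.
        s = self.stats[topic]
        s.attempted += 1
        s.correct += int(is_correct)

    def next_topic(self) -> str:
        # Serve the topic with the lowest demonstrated accuracy,
        # breaking ties in favor of the least-practiced topic.
        return min(self.stats, key=lambda t: (self.stats[t].accuracy,
                                              self.stats[t].attempted))


# Example: after a few responses, the learner is routed to their weakest topic.
module = AdaptiveModule(["safety", "compliance", "tools"])
module.record_response("safety", True)
module.record_response("compliance", False)
module.record_response("tools", True)
print(module.next_topic())  # -> "compliance"
```

In practice the same per-topic data could also feed the L&D team's own evaluation dashboards, closing the continuous feedback loop described above.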
Where evaluation expertise fits into L&D
Thalheimer suggests that L&D professionals find a way to “educate ourselves” on meaningful evaluation. Read more about how L&D teams perform—and could improve—evaluation in Thalheimer’s report, Evaluating Learning: Insights from Learning Professionals.
Wagner sees data-related skills and competencies—from data science, computer science, and the learning sciences, focusing on technical standards, technology-based tool and platform solutions, and instrumentation—as part of the learning engineer’s role. Explore this emerging role and how it could shape the future of instructional design in Learning Engineering: A Primer.
Both research reports are free to eLearning Guild members.