The key to creating great eLearning is keeping the learners in mind. That sounds simple, but it can be devilishly difficult. One way to keep learners’ needs and abilities front-and-center is user testing.
Learning architect and Guild Master Nick Floro, president of Sealworks Interactive Studios, swears by user testing at every stage. He starts a project by testing ideas with the audience, and as his teams progress through design and development, they test at each stage. Floro says that testing with about 10 to 15 actual users is the “magic number.” According to Floro, it is not necessary to recruit new learners to test each iteration of an eLearning project or even different projects. “You can use these people again and again, and they love participating; they feel like part of the process. Ask them if they would participate again as you evolve a concept or project, and take advantage of that,” he said. “They become advocates to help communicate the project and help you launch successfully, as well as being a vital source of feedback to test an idea or concept. This is a win-win.”
The value of user testing is also clear to Guild Master Michael Allen, chairman and CEO of Allen Interactions, who pioneered a development model that incorporates successive approximations of an eLearning module; user testing is built into every step of the process. “Early evaluation is input that either confirms or refutes the correctness of the analysis or design,” Allen writes in his Guide to e-Learning (see References).
Allen’s development model calls for building prototypes rather than simply specifying or drawing models of eLearning modules. “With functional prototypes, everyone’s attention turns to the most critical aspect of the design, which is the interactivity, as opposed simply to reviewing content presentations and talking about whether all content points are in evidence on the screen,” Allen writes. “We also know that many designs that are approved in a storyboard presentation are soundly and immediately rejected when they are first viewed on the screen, and they are even more likely to be rejected when they become interactive. Prototypes simply provide an invaluable means of evaluating designs.”
Some instructional designers (IDs) who use personas to model targeted learner groups test with actual learners—users—in addition to designing to meet the personas’ needs. Personas and user testing are “symbiotic and hopefully reinforcing,” said Lacey Jennings, a service delivery leader at Xerox Learning Services. “There will always be a need for user testing for new interfaces and websites—personas just help you dig deeper into understanding which learner segment you need to include or consciously motivate.”
Megan Torrance, CEO of TorranceLearning, said that in creating personas, “we, as instructional designers, can stay connected to the learners we’re supporting throughout the project.” The learner-testers can then, in effect, bring the personas to life: “We want to make sure that we include representatives from the personas in the testing.”
Catch—and fix—problems early
It is essential to create eLearning that is both useful and usable; the content has to be worth learners’ time, but the best eLearning in the world is useless if no one completes it. “You can build all kinds of learning, and people aren’t going to show up and take it because it means nothing to them,” said Sarah Thompson, a marketing and communications efficacy improvement manager at Pearson.
Thompson advocates using personas to capture the motivations, goals, and needs of learners, which can make learning more useful. And she says that “user testing may reveal additional context, or further qualification of motivations, that wasn’t revealed when you first defined personas—it [user testing] tells you more specifically [about] something such as sequencing,” meaning the order in which learners use eLearning content.
Testing with members of the learner group who will ultimately use the eLearning lets developers see how real people navigate screens, how easily they complete basic tasks, and whether they remain engaged. If something is unclear—all of the learner-testers make the same navigation errors, for example, or consistently fail to complete a section correctly—developers can identify and fix the design problem during development, saving time, money, and future frustration and ultimately making the eLearning more usable.
When creating eLearning that will be used on different types of devices—laptops, tablets, smartphones—it’s important to test on as many devices as possible. Learners interact with content differently on smartphones than they do on laptops, so developers should be sure to test both the way the eLearning works and the way learners use it across multiple devices.
While different from formative evaluation of the eLearning content, user testing can provide an indication of how effective the eLearning will be. Testers can observe whether learners engage with the eLearning and check how well they are able to recall or apply the content. A poor response to the content is best caught and fixed before the eLearning is released to a broader group of learners.
Collecting data and feedback during user testing of prototypes and applying that information to improve each iteration will ultimately result in eLearning that is both usable and useful.