Julie Dirksen of Usable Learning is a learning strategy consultant; she writes, teaches, and consults on eLearning design. Her book, Design for How People Learn, is not only a fantastic introduction to instructional design but also a fun and entertaining read. The eLearning Guild recognized Julie as a Guild Master at the DevLearn 2016 Conference & Expo. She is a frequent presenter at Guild conferences and online events. I recently spoke with Julie about the challenges of getting training and behavior change to stick, and the role of feedback in influencing learner behavior. This portion of our interview has been edited for length and clarity.
Pamela Hogle (PH): eLearning encompasses a range from training to performance support and includes just-in-time information, job aids, automated reminders, etc. What is the best way to design an eLearning program that supports lasting behavior change?
Julie Dirksen (JD): When we get into the behavior change piece, a lot of times we’re looking at behaviors that are difficult to change. The way that I usually define it when I talk about it is: things where people know the right thing to do, but they’re still not doing it. …
We, as a field, in learning and development and in eLearning, have pretty good tools already for when the problem is really information-based—people just don’t know the answer, they don’t know how to use a particular piece of software; we’ve already got a decent tool set for those kinds of procedural or knowledge-based problems. But when that’s not enough, when you’ve got a more challenging behavior-change question, that’s when I think we need to start looking at some other models and other solutions.
The big issue I have with a lot of the behavior-change problems is really making sure that you are solving the right problem. One of our traditional approaches to behavior change is to just tell people louder and more emphatically that it’s really, really important. And I think we can all extrapolate how well that works in certain scenarios.
So, then it becomes an issue of looking at, ‘Well, what’s really going on?’ The answer is usually complicated stuff, right? So, for example, I’ve got a diagnostic worksheet that I use whenever I’m dealing with diagnosing a behavior-change problem.
One of the most common elements I see in behavior-change problems is an absence of concrete or visual feedback for the behavior. So, we really want people to do this, but nobody is paying any attention, and the person isn’t getting any feedback on whether they’re doing it right or even feedback acknowledging that they are doing the behavior. Basically, it’s kind of existing in a vacuum. I’d say that’s probably the number one reason why most of the behavior-change problems occur: an absence of visible or tangible feedback on the behavior itself.
For example, one of the behavior-change problems that I use as an example is hand washing in healthcare facilities. It’s a classic behavior-change problem that everybody understands. Every healthcare facility has wrestled with it. The people who work in health care are good, dedicated professionals who want to do the right thing, and yet we still see problems with compliance with hand-washing requirements. The estimates I’ve seen in research studies put compliance, on average, at somewhere around 70 percent of required hand washing.
If we start to break that problem down and say, ‘What’s going on there?’ there are a bunch of things going on. One is the issue of absent or invisible feedback. Some hospitals have put programs in place so that people can get feedback on it. But with the activity itself, well, you have to have faith that your hands have bacteria on them. Everybody knows that they do, but you don’t really see it. I think that, if your hands turned blue when they had bacteria on them, we wouldn’t have a hand-washing problem. You’d go, ‘Oh, my hands are blue. I have to wash my hands.’ And that would be the end of the story. Everybody would be pretty much completely compliant.
But it’s not like that; it’s actually invisible. You know, intellectually, that your hands are clean after you’ve done a good job of washing them, but they don’t usually look all that different. In healthcare situations, it’s not like you’re scrubbing off dirt. You’re taking hands that ostensibly look clean, and you’re making them really clean, as opposed to just kind of clean.
PH: That’s what you mean by visible feedback, then? There’s no result from the action that looks different from when you didn’t do it?
JD: Yes. Those tend to be tough behaviors to change, because even when people know that it makes a difference and that there are bacteria, they have to persist in the behavior based on the cognitive knowledge that the bacteria are there, rather than on any visible progress. …
Take a traditional leadership activity that we’ve got a lot of training about—this is in every management training you’re ever going to see: ‘How do you do a good job of giving feedback to your direct reports?’
That’s another one where maybe you give somebody feedback and there is an immediate visible benefit: They stop doing something that they were doing before, or they start doing something new. But a lot of times it’s not that concrete—it’s not that definite a behavior. And so, hopefully, you’re doing that thing [the new behavior], but you’re not really getting an immediate result.
When we think about Fitbit apps and things like that—the popularity of these fitness apps—a big part of what that’s doing is trying to take something that’s invisible and make it visible to people. So, for example, people want to exercise so that they can get into shape. But getting in shape isn’t a concrete goal per se. You could make it concrete: I want to improve my blood oxygen; I want my pulse to be this instead of that when I do a 30-second step test. Going for a walk is not concrete; you know you’ve done something, but you don’t know what you’ve accomplished. So, the Fitbit takes that and turns it into actual numbers. You have a concrete goal. You can take something that’s a little bit vague—I should really get more exercise—and turn it into something you can actually measure and track and see your progress on. That’s the benefit of those devices.
Getting that kind of real, visible feedback is hard, and it’s rare, and it happens slowly. It’s tough to stay motivated if you aren’t getting some kind of encouragement with it. So, that’s a case of diagnosing: Is one of the problems with this behavior change that people aren’t getting tangible feedback on what they’re doing? And if that’s the case, then, guess what? It may not be a training problem at all; it may be a problem about what the feedback mechanism is on it. But then at least you’re not trying to solve the wrong problem.
PH: You’ve mentioned the VHIL (Virtual Human Interaction Lab) at Stanford in some of your work; they are studying whether immersive experiences drive behavior change. What do you think of the potential of these types of experiences to have long-lasting effects on learners’ behavior?
JD: I haven’t seen anything yet where they’ve done long-term research on using that strategy, but their work is interesting to me for a couple of reasons. One has to do with the idea of knowledge versus belief. We tend to adapt our behavior based on belief: I believe that this is a good thing to do; I believe that this is important—as opposed to I know.
I realize that’s kind of a weird, subtle distinction, and I’m working on clarifying that distinction. There’s something about visceral experience that seems—I’m using very qualified terms because, again, there isn’t that much research yet—it seems that having a visceral experience, rather than being cognitively told some things, may make a difference in not only your knowledge of the topic but also your belief in its importance or your belief in taking action on it. So that’s one of the reasons that I think that particular study is interesting.
I don’t know how we’re doing longitudinally—whether it’s more durable in terms of effect. [Editor’s note: The VHIL is conducting longitudinal studies now, but the published research looks only at behavior changes immediately following an immersive experience.] We know behaviors are more durable when people perceive them to be tied to their values or they see some bigger purpose. We know that when we use extrinsic reward systems—we’re essentially sort of bribing people into the behavior—that that’s not a very durable model. There are better and worse qualities of motivation. If you tie behavior to people’s value system and help them see how it’s part of things they value or things they think are important, it’s more likely to be a durable behavior. If you tie things to more intrinsic things, like relatedness or social connectedness, it’s also a more durable behavior. …
We talk about emotional appeal, sometimes a little disdainfully, because we’ve got this ‘religion’ of rational thought. But the thing is, we use emotionality as a gauge for how important something is: ‘If I feel strongly about it…’ We don’t talk about thinking strongly about something. There’s an element of emotionality there, and that’s how I know that something is important. If I don’t feel anything in particular about a topic, I immediately go, ‘Apparently, that topic is really not that important to me, and I don’t care that much. I am going to back-burner it.’ So, that’s part of behavior change as well. Because if something feels abstract, then my brain is going to decide that it might not be that important: ‘We’re not going to worry about that right now; I believe you, but it just doesn’t feel that important.’
That is why I think the Stanford virtual reality work is interesting. I think they’re getting at that level of understanding about particular topics. If I have a physical experience of something, am I going to feel differently about it—and is that going to change my behavior?
PH: Is there anything else about approaching eLearning design that you’d like to share for readers who might be new to this?
JD: Tips for beginners—that’s what I do most of the time.
I think really understanding what kind of a problem you’re trying to solve is a big one, as is not oversimplifying things.
My standard tip for beginners is: Do user testing of your stuff. That’s the short version of my tips—learn how to do good user testing. Steve Krug’s book Don’t Make Me Think is probably the best single resource for getting started with that.
I’d say if you are new to eLearning and you don’t do anything else—if you can do user testing of the materials that you’re making and make sure that you’ve got a good feedback loop on what you’re building, that’s probably the single most important thing that you can do.