My new role as The eLearning Guild’s director of research is a huge change from my past decades of work as a classroom and eLearning practitioner. The work is absolutely in my wheelhouse, pushing me to learn something every day while letting me use all that doctoral training in research methods and whatnot. While remaining mindful of the problem of survey fatigue, we do conduct a few surveys a year to see what is happening with our membership, and I’m always gratified by the enthusiasm of responses as well as the thoughtfulness of answers to open-text questions. One thing has leapt out at me, though, that seems especially relevant and helped prompt our most recent report, Ellen Wagner’s “Putting Data to Work.” What’s that thing? The number of times we see this answer when we ask for outcome data: “I don’t know.”
For instance, here is what we reported in March’s report, “Using Social Tools for Learning,” based on a survey conducted intermittently over the past decade:
“Another thing that has remained constant over the years is a certain lack of awareness of what is going on organization-wide. Across the 2018 survey, the instances of ‘I don’t know’ were striking: When asked about informal use of social tools by workers, the average response to ‘I don’t know’ was 26.9%.”
A similar proportion (26.5%) did not know their target audience’s preferred tool. Fifteen percent of respondents said they did not know their organization’s policy on employee-generated content. And when asked how they will measure the success of social tools use, nearly a quarter (22.3%) said “I don’t know” (p. 29).
Given my work over the past 10 years or so, I find this exasperating (is that too strong a word?): time and again, objections to social media use from L&D practitioners have included some variation of “our people don’t like/don’t use/don’t know how to use particular tools,” when more than a quarter of people responding didn’t know what tools workers use informally or prefer. There are any number of ways to find out more about that, from simple surveys to worker Instagram contests. Likewise, there’s a lot of help out there for those wanting to know more about what and how to measure with regard to social tools use.
We saw something similar with the April report, “What’s Your Reality? AR and VR for Learning,” which showed there “appear to be challenges with linking AR and VR efforts to real-world performance outcomes. Fewer than a third of respondents are tying efforts to results, with VR far ahead of AR. In both cases, about a quarter of respondents said, ‘I don’t know’” (p. 14).
Figure 1: Respondents indicated difficulty correlating activities with outcomes
And still more, from Jennifer Hofmann’s May report, “Blended Learning in Practice,” this time illustrating different problems: “Participants in this study varied most dramatically in this aspect of their approaches—ranging from not collecting data at all, to collecting data and not using it, to fully integrated key performance indicators (KPIs) tied to automated data collection and downstream learner performance metrics. The predominant theme that emerged from the study was a relative sense of inadequacy in evaluating blended instructional programs for organizational objective accomplishment” (p. 12). Participants in Hofmann’s study shed light on some other problems:
- “…the business unit gets less information than L&D do, and that’s just a crying shame.” (p. 13)
And maybe worst of all:
- “One common observation was that data often got collected about learners and the program delivery, but program sponsors were at a loss with what to do with it.” (p. 12)
This month’s eLearning Guild research report, “Putting Data to Work,” may help practitioners think through ways of tying efforts to outcomes. Among other things, author Ellen Wagner says, we need to figure out where the data even is: in the organization’s currently preferred technology platforms and tools—user files, feedback forms, website forms, log-in information, downloads, content asset use, and platforms including LMSs, CRMs, HRISs, community portals, and the like (p. 10). Wagner reminds us that this needn’t be so difficult and may not even require “big data”; in thinking about useful data, she says, ask which data really matter.

Based on my own past work, I’d add nosing around for data collected elsewhere: What about things like aggregate performance ratings, employee relations complaints, or turnover reports amassed by HR, or injury/accident reports from Safety? What can the Marketing department tell you about customer feedback? And don’t forget the amount of data available in the typical commercial LMS—how can you better leverage that? If you lack awareness of organization-wide attitudes and policies, start with the HR manuals and build out by working on relationships with people who might have some awareness of, or movement across, organizational silos. This is harder for freelancers and contractors, I know, but try to find ways to get a bigger view of available data when you can. I recall working on a private project once; the company’s safety office said no one associated with training had ever asked for any data before – and they had mountains of it.
And one more thing: Make sure the data you do get don’t live in your silo. What information would be useful to the stakeholders? What might learners want to know? How and where could you share it? We talk about the need for L&D practitioners to become more data literate, but what about the managers of those workers? Or directors of work units? How can you help them make sense of what you know, data-wise?
Getting a better handle on worker information, organizational philosophies and practices, and ways of tying eLearning efforts to results and outcome data can help you win funding and support for projects, and perhaps even put you in line for promotional opportunities or, if you’re external, additional projects. It can shore up both your credibility and the quality of your work products. Find ways to answer, “I do know.”