In my time as The Learning Guild’s director of research—has it really been 2 ½ years already?—I have often asked, in surveys or during live events, what additional research people would like to see us publish. Many suggest ideas I pursue, like taking on a popular myth or a hot topic such as AI or virtual reality. Some ask for highly technical information, like data on the optimal length in minutes for a given item such as a video or podcast. (Answer: It depends.) Others ask for things that are easily found with an internet search, some of which I captured in our October 2019 research report, Let Me Google That For You: LMGTFY. But quite a few people ask for more help understanding how to find, identify, and interpret good research. One of my own heroes, University of Virginia professor Daniel Willingham, opened a recent Facebook post with this quote: "Teachers are interested in research and perceive value in research to inform their work. However...teachers struggle to identify sources of quality research and how to translate research to inform their teaching."
A goal of mine in this job has been to accomplish that very thing: to help people find research more accessible and less intimidating, to interpret academic-ese where needed, to save readers time by cutting through swaths of existing material, and ultimately to help them apply research to improve their practice. I often include that kind of information in the reports themselves, for instance by offering annotated bibliographies of important work, taking care to define terms, suggesting important names to know, or discussing concerns about working with vague or conflicting material. Earlier this year I wrote about surprises I encountered in writing about L&D research. To continue that conversation, here are some common challenges I face in my work—challenges I try to overcome to make our research products better for you.
Too much
While I was champing at the bit to review the literature on learning styles, the sheer weight of it was daunting: it goes back decades and includes at least 71 documented models of “learning styles” (and there may be more). Working with that literature meant narrowing my focus from “what does the research say about learning styles?” to “what does the experimental research say about tailoring instructional approaches to learning style?”
Too little
As I’ve written before, when embarking on the report on “generations” in the workplace I was surprised to find so little data from sources like HR offices showing a connection between “generations” and things like performance issues or workplace conflict. I found virtually none. Further, there is very little agreement about what constitutes a “generation”, how that idea differs from any other grouping of humans by age, or what factors matter when considering the impact of generations on organizational performance.
Too new
Sometimes an idea is just too new to find much in the way of data. For instance, our recent report on Learning Experience Design (LxD), published partly in support of our Online Conference on the topic, proved challenging. The term LxD first appeared in 2007 and there’s nothing in the way of “hindsight” research to show that an LxD-based approach garners, for instance, better performance outcomes. We can examine elements of LxD that we know are good instructional strategies—like spaced repetition, for instance—but there’s not much on LxD per se.
Too old
A challenge in writing about technology is the double-edged reality of publication: We look for credible work that has been vetted. But research, particularly experimental endeavors, may be a year old before being submitted for publication, and peer review and publication can take another year or longer. So a piece published in a peer-reviewed journal in, say, 2018 may be reporting findings of an inquiry or discussing features of a technology that are already at least two years old.
Too obtuse
A favorite conversation in graduate school was, “Who is research for?” The truth is that research is largely written by academics for other academics, published in obscure-sounding and often expensive journals, and reading it can feel like deciphering a foreign language. In fact, one of my predecessors in this role said she hired someone to “academic up” her dissertation. I like to think that ultimately research is for the practitioner, and I try to build a bridge by deciphering and translating material for our readers, especially with an eye toward application or otherwise improving practice.
Too questionable
An issue that came to light while I was working on the Learning Styles report was the matter of predatory journals: those that will publish, for a fee, nearly anything submitted, with minimal editorial oversight. Last time I checked, every article I found claiming a correlation between learning style and instructional approach was published in a predatory journal. So writing about research is further complicated by the need to check the credibility of the publications themselves.
Want to learn more about research?
In the spirit of helping us all become better readers of research, The Learning Guild is launching a new initiative, #GuildReads. We’re planning live events with some of the authors of our reports in which we can informally chat about their findings, as well as the process of conducting research. Think of this more as a “club” and less as a “presentation”. The first #GuildReads happens July 17. Here are more details about #GuildReads.
You can access our entire research library (free with a free membership) at www.learningguild.com.