
Nuts and Bolts: Gather, Understand, and Share Outcome Data
My new role as The eLearning Guild’s director of research is a huge change from my past decades of work as a classroom and eLearning practitioner. The work is absolutely in my wheelhouse, pushing me to learn something every day while letting me use all that doctoral training in research methods and whatnot. While remaining mindful of the problem of survey fatigue, we do conduct a few surveys a year to see what is happening with our membership, and I’m always gratified by the enthusiasm of responses as well as the thoughtfulness of answers to open-text questions. One thing that has leapt out at me, though, seems especially relevant and gave a nudge to generate our most recent report, Ellen Wagner’s “Putting Data to Work.” What’s that thing? The number of times we see this answer when we ask for outcome data: “I don’t know.”
For instance, this is what we reported in March’s report, “Using Social Tools for Learning,” via a survey conducted intermittently over the past decade:
“Another thing that has remained constant over the years is a certain lack of awareness of what is going on organization-wide. Across the 2018 survey, the instances of ‘I don’t know’ were striking: When asked about informal use of social tools by workers, the average response to ‘I don’t know’ was 26.9%.”
A similar amount (26.5%) did not know their target audience’s preferred tool. Fifteen percent of respondents said they did not know their organization’s policy on employee-generated content. And when asked how they will measure the success of social tools use, nearly a quarter (22.3% of respondents) said “I don’t know” (p. 29).
Given my work over the past 10 years or so, I find this – is it too much to say “exasperating”? – because time and again, objections to social media use from L&D practitioners have included some variation of “our people don’t like/don’t use/don’t know how to use particular tools,” when more than a quarter of people responding didn’t know what tools workers use informally or prefer. There are any number of ways to find out more about that, from simple surveys to worker Instagram contests. Likewise, there’s a lot of help out there for those wanting to know more about what and how to measure in regard to social tools use.
We saw something similar with the April report, “What’s Your Reality? AR and VR for Learning,” which showed there “appear to be challenges with linking AR and VR efforts to real-world performance outcomes. Fewer than a third of respondents are tying efforts to results, with VR far ahead of AR. In both cases, about a quarter of respondents said, ‘I don’t know’” (p. 14).
Figure 1: Respondents indicated difficulty correlating activities with outcomes
And still more, from Jennifer Hofmann’s May report, “Blended Learning in Practice,” this time illustrating different problems: “Participants in this study varied most dramatically in this aspect of their approaches—ranging from not collecting data at all, to collecting data and not using it, to fully integrated key performance indicators (KPIs) tied to automated data collection and downstream learner performance metrics. The predominant theme that emerged from the study was a relative sense of inadequacy in evaluating blended instructional programs for organizational objective accomplishment” (p. 12). Participants in Hofmann’s study shed light on some other problems:
- “…the business unit gets less information than L&D do, and that’s just a crying shame.” (p. 13)
And maybe worst of all:
- “One common observation was that data often got collected about learners and the program delivery, but program sponsors were at a loss with what to do with it.” (p. 12)
This month’s eLearning Guild research report, “Putting Data to Work,” may help practitioners think through ways of tying efforts to outcomes. Among other things, author Ellen Wagner says, we need to figure out where the data even is: We need to start looking at data from the organization’s currently preferred technology platforms and tools—user files, feedback forms, website forms, log-in information, downloads, content asset use, platforms including LMSs, CRMs, HRISs, community portals, and the like (p. 10). Wagner reminds us that this needn’t be so difficult, and may not ever even require “big data,” and says that, in thinking about useful data, we should ask which data really matter.

Based on my own past work, I’d add nosing around for data collected elsewhere: What about things like aggregate performance ratings, employee relations complaints, or turnover-rate reports amassed by HR, or injury/accident reports from Safety? What can the Marketing department tell you about customer feedback? And don’t forget the amount of data available in the typical commercial LMS—how can you better leverage that? If you lack awareness of organization-wide attitudes and policies, start with the HR manuals and build out by working on relationships with those who might have some awareness of or movement across organizational silos. This is harder for freelancers and contractors, I know, but try to find ways to get a bigger view of available data when you can. I recall working on a private project once; the company’s safety office said no one associated with training had ever asked for any data before – and they had mountains of it.
And one more thing: Make sure the data you do get don’t live in your silo. What information would be useful to the stakeholders? What might learners want to know? How and where could you share it? We talk about the need for L&D practitioners to become more data literate, but what about the managers of those workers? Or directors of work units? How can you help them make sense of what you know, data-wise?
Getting a better handle on worker information, organization philosophies and practices, and ways to tie your efforts to results and outcome data in eLearning can help you win funding and support for projects, and perhaps even put you in line for promotional opportunities or, if you’re external, additional projects. It can shore up both the quality and the credibility of you and your work products. Find ways to answer, “I do know.”



