The problem with learning data

You probably know the saying, “If all you have is a hammer, everything looks like a nail.” It is a warning not to focus on a particular tool before you define the problem you are trying to solve. Otherwise your solution may be biased by the capabilities of the technology.

The same can be said about the data that learning and development (L&D) teams collect. If all you have are “smile sheets” (post-course evaluations), then everything looks like a popularity contest. If all you have is participation data (class attendance or course launches), then everything looks like an award for showing up. If all you have is completion data, everything looks like compliance.

It’s not easy to obtain learning data. We can’t see inside people’s minds to determine whether or what they learned. That’s why we rely on proxy data: information that results indirectly from learning. For instance, we can point to a reduction in accidents to show the success of safety training. But these proxies have their own challenges. Often, the effort to collect proxy data exceeds the value it produces, and it is hard to be sure whether the results are really attributable to learning. For instance, an increase in sales after sales training could be caused by a new product release that happened at the same time.

Given the difficulty of delivering value and the scarcity of resources in most L&D departments, it is understandable that L&D teams gravitate toward data that is easy to acquire. This is especially true because learners have limited time: if you ask them to spend more of it helping you record learning data, they will respond by refusing to participate. I’ve heard stories of five-page evaluations, 40-question assessments, and monthly questionnaires. Wasting employees’ time like this is a recipe for disaster.

So instead we chase after data that is easy to get, even though it may be of little use. It’s like telling your dermatologist about your heart rate because that’s what your smartwatch gives you. It may be tempting to collect meaningless data just to satisfy a boss who claims to be “data-centric,” but that is shortsighted. In the end, all data collection costs the company in lost resources. For that investment to be justifiable, the resulting data needs to answer important questions, and it needs to be actionable. If you don’t know what questions need to be answered and what actions need to be taken, how can you really know what data needs to be collected?

Why collect data?

To resolve this issue, learning leaders need to explore why learning data is needed in the first place. To dive into that question, you need to know why L&D departments exist. Every L&D department was created by a leader in the business who wanted to control the risks inherent in managing people. Because L&D usually does not produce a profit, it is a cost center. Therefore, the L&D department will continue to exist only as long as management believes that the risk mitigation it provides is worthwhile. To make this determination, management needs to know whether those risks are actually being mitigated:

  • Are people engaged in their work?
  • Are they meeting their goals?
  • Are they adopting changes?
  • Are they compliant with regulations?
  • Are we spending too much money on learning that isn’t being used?
  • Are we not spending enough money on learning that is needed?

Data can be used to improve processes, thereby making better learning content available to more people. It can also be used as a benchmark for measuring the success of those improvements. Data also helps us remove bias from our processes.

How much data is enough?

The data you collect may seem to be a free byproduct of doing business. However, collecting, storing, retrieving, processing, migrating, and maintaining data all incur costs, and those costs scale with volume. It is therefore important for learning leaders to think about what volume of data is sufficient to solve their problems.

Unfortunately, our greatest risk is often regulatory compliance, and compliance plans are usually written in a language of absolutes: zero tolerance, 100% adherence. This usually isn’t necessary, nor is it really feasible. Nevertheless, this attitude has infected most learning departments and has spawned an LMS industry whose complexity is staggering. Even when compliance isn’t an issue, L&D departments are held hostage to an unnecessary level of data reporting.

There was a commercial years ago in which a city mayor was announcing the demolition of a building, but to his horror, when he pressed the plunger, five buildings were destroyed. When asked why, the demolition experts said, “We had the extra dynamite.” Just because you have the capability to collect tons of data doesn’t mean you should.

"We had the extra dynamite." Illustration by Adam Weisblatt

One solution is to use sampling. Much like the way TV ratings are collected, you can use a smaller subset of the data to infer results for the whole. Alternatively, when your data is incomplete, you can focus on changes in the data rather than absolute values. For instance, if your course completion data is only about 70% accurate (not every completion gets recorded), you can still measure the percent change in completion rates over time, because the capture rate should be roughly consistent across the entire data set.
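To see why a consistent undercount drops out of the arithmetic, here is a minimal Python sketch. The quarterly counts and the 70% capture rate are invented for illustration:

```python
# Made-up numbers: true completions per quarter, of which only ~70%
# ever get recorded in the LMS.
true_completions = {"Q1": 1000, "Q2": 1150, "Q3": 1265}
CAPTURE_RATE = 0.70

recorded = {q: n * CAPTURE_RATE for q, n in true_completions.items()}

def pct_change(new: float, old: float) -> float:
    return (new - old) / old * 100

quarters = list(true_completions)
for prev, curr in zip(quarters, quarters[1:]):
    true_delta = pct_change(true_completions[curr], true_completions[prev])
    rec_delta = pct_change(recorded[curr], recorded[prev])
    print(f"{prev} -> {curr}: true {true_delta:.1f}%, recorded {rec_delta:.1f}%")

# Both columns print the same percentages: the 0.70 factor divides out
# of the ratio, so the trend is trustworthy even if the totals are not.
```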

A big impediment to collecting data is trying to associate activities with identified users. This effort is limited by the availability of consistent, timely, and accessible HR data, and it raises privacy issues that need to be addressed, especially in an international workforce. For regulatory compliance there may be no choice, but not every situation requires user-identified data. Google Analytics, which marketing teams in every industry use to measure website traffic, works without personally identifying its visitors. That data simply isn’t necessary.

What data should we collect?

I had a co-worker who refused to collect data that she knew would not be acted on. Collecting learning data is challenging, and she wasn’t going to waste her time if no one else would put in the time to make use of the results.

When choosing what data to capture, you need to start with the actions you expect people to take. Here are some actions and possible data points:

  • Action: Invest more in learning if underspent; reduce costs to make learning more economical.
    Datapoint: Cost per participant.

  • Action: Retire unused courses and increase investment in courses with higher usage rates.
    Datapoint: Course usage over time.

  • Action: Improve compliance by reaching out to managers.
    Datapoint: % completion of assigned training.

  • Action: Encourage a continuous learning culture through local learning day campaigns.
    Datapoint: Average course attendance/launches per person per month/quarter/year.

  • Action: Improve awareness of initiatives by providing links in corporate communications using different channels.
    Datapoint: # of referrals for incoming traffic from each location where courses are promoted (email, intranet, etc.)

The datapoint examples above are quantitative: they are numbers produced by user actions. This type of data collection has two advantages: it is automatic, and it is more objective.
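To make that concrete, here is a minimal Python sketch of how two of the datapoints above might be computed from raw activity logs. The event records and field names are hypothetical, not from any particular LMS:

```python
from collections import Counter

# Hypothetical activity log: one record per course launch.
events = [
    {"course": "Data Privacy", "user": "u1", "quarter": "Q1", "completed": True},
    {"course": "Data Privacy", "user": "u2", "quarter": "Q1", "completed": False},
    {"course": "Data Privacy", "user": "u3", "quarter": "Q2", "completed": True},
    {"course": "Safe Lifting", "user": "u1", "quarter": "Q2", "completed": True},
]

# Datapoint: course usage over time (launches per course per quarter).
usage = Counter((e["course"], e["quarter"]) for e in events)
for (course, quarter), launches in sorted(usage.items()):
    print(f"{course} ({quarter}): {launches} launch(es)")

# Datapoint: % completion (here, of launched courses; measuring completion
# of *assigned* training would also require the assignment list).
completion_rate = sum(e["completed"] for e in events) / len(events)
print(f"Completion rate: {completion_rate:.0%}")
```

Note that both numbers fall out of data the system already records automatically; no one had to fill in a form.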

The other type of data collection is qualitative: you are essentially asking the learner (or their manager) questions about their experience. This takes up people’s time, and it is subjective and sometimes prone to bias. Why do it, then? Because qualitative data is usually richer in direct knowledge, and the results don’t have to be inferred.

For qualitative data you still need to focus on the action you will take on the results, but you also have to reduce the overhead for learners, managers, and L&D teams. Most importantly, you need to make sure that you have removed as much bias from the process as possible. A good way to accomplish both tasks is with small samples. This way you impact fewer people, and you can compare data sets to see if the results are skewed by bias.
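Here is a minimal Python sketch of that comparison. The ratings are invented; the point is the two independent draws, not the numbers:

```python
import random
import statistics

random.seed(7)  # reproducible illustration

# Invented 1-5 satisfaction ratings for the whole learner population.
population = [random.choice([2, 3, 3, 4, 4, 4, 5, 5]) for _ in range(400)]

# Survey two small, independently drawn samples instead of everyone.
sample_a = random.sample(population, 25)
sample_b = random.sample(population, 25)

print(f"Sample A mean: {statistics.mean(sample_a):.2f}")
print(f"Sample B mean: {statistics.mean(sample_b):.2f}")

# If the two samples broadly agree, the result is more credible; a large
# gap is a warning that the sampling (or the survey itself) is skewed.
```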

Taking an action-centric view of data collection when formulating a data strategy will return value not only in the improvements made from the results, but also in the improved morale of your team and the improved satisfaction of your learners.
