Data collection and analysis are essential to the success of L&D organizations, and a measurement strategy is the foundation of that effort. Yet devising a measurement strategy for L&D is difficult. This article outlines what it takes to create a strategy for identifying and measuring the business impact of learning and development, and it provides links to some readily available resources.

Part of the difficulty in many organizations is that L&D has no way of its own to collect the necessary information. Another part is that senior management often has a mistaken understanding of the enterprise L&D function, based on their own experience as learners in the education system. And finally, much of the data is elusive: cause and effect are hard to pin down.

L&D must first identify the cost of skill gaps (a somewhat controversial topic among managers, but a real cost nonetheless), the cost of poor performance, and the value of correcting those shortcomings. The second difficult problem is connecting L&D’s efforts to actual results in a way that management can accept. Done correctly, L&D’s efforts can correct problems and respond to needs. But getting there is a tough job!

Business performance measures

In the 2018 Guild research report Evaluating Learning: Insights from Learning Professionals, Will Thalheimer shows that most organizations do not measure the right things if they want to know whether learning is effective. Instead, 83 percent measure learner attendance or program completion, and 72 percent ask learners whether they liked the training or whether it will help them do their jobs better. These correspond to Kirkpatrick Levels of Training Evaluation 1 and 2, and they are not measures of business impact.

As Clark Quinn pointed out in "Quinnsights: The Difficulty of Measuring L&D," the Kirkpatrick Levels are not adequate to the task. And there is another problem with the way L&D has often measured results: the measures tend to be cost comparisons. But according to Quinn, "It’s not just about L&D efficiency. It absolutely has to be about our impact. It’s been comfortable to coast along ensuring that our costs are no worse than industry average, but eventually someone is going to want to determine whether the L&D investments are good value. And as long as L&D operates as a cost-center, an expected investment with no real data on the efficacy, it will be the first victim in cost-cutting." Notice that the key is efficacy, not efficiency.

Break free from evaluating only learner reaction and learning mastery. Conrad Gottfredson's research shows the same pattern: almost 70 percent of practitioners measure no more than that, and it isn't enough ("Show Me the ROI").

Every line manager can quote Peter Drucker: “What gets measured gets improved.” But not everything that matters can be measured, and, to make things worse, what is hard to measure is hard to manage. This is especially true of the cognitive skills that make it possible to learn on the job, and it is the heart of the challenge for L&D.

Meeting the challenge

What counts are business outcomes before and after training. These outcomes lie largely outside the exclusive domain of L&D, which means measuring them requires collaboration with other departments and business units. That can be difficult, but the information is important. Such outcomes include, for example (see the sketch after this list):

  • Productivity of the company and business units
  • Market share
  • Production errors
  • Scrap
  • Injuries
  • Employee performance on the job
  • Time to competence
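
For a concrete, if simplified, picture of what "before and after" can look like, here is a minimal sketch in Python. The metric names and figures are hypothetical placeholders drawn from the list above; in a real effort the numbers would come from the business units that own each measure.

```python
# Minimal sketch of a before/after comparison for a few business outcomes.
# All metric names and figures here are hypothetical placeholders; in practice
# the numbers come from the business units that own each measure.

baseline = {          # measured before the learning program
    "defect_rate_pct": 4.2,
    "scrap_cost_usd": 180_000,
    "time_to_competence_days": 90,
}

post_program = {      # measured after the learning program
    "defect_rate_pct": 3.1,
    "scrap_cost_usd": 150_000,
    "time_to_competence_days": 68,
}

def pct_change(before: float, after: float) -> float:
    """Percentage change relative to the baseline value."""
    return (after - before) / before * 100

for metric, before in baseline.items():
    after = post_program[metric]
    print(f"{metric}: {before} -> {after} ({pct_change(before, after):+.1f}%)")
```

Even a comparison this simple only shows that the numbers moved; attributing the change to the learning program rather than to other factors is the part that requires agreement with management and the business units, as discussed above.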

Use measurement language that makes sense to business leaders: execution, fulfillment, accomplishment, growth, increase, advance (see Marjan Bradeško’s article, "Can You Communicate with Senior Leaders in Their Language?").

In her Nuts and Bolts column, “How to Evaluate eLearning”, Jane Bozarth has suggested alternative measurement models that may be helpful to you: Brinkerhoff’s Success Case Method (SCM) and the Stufflebeam model.

The Measurement and Evaluation Online Conference

The Measurement & Evaluation Online Conference (September 30-October 1, 2020) will explore further ideas, tools, and techniques for properly calculating, analyzing, and proving the effectiveness of your L&D efforts. Andrew Joly and Geoffrey Bloom, both with LEO Learning, will present an approach to creating a strong foundation for data collection: a step-by-step way to progressively build your team's capability and knowledge in order to deliver rapid results. You will learn different methodologies and practical processes, examine case studies, and see how data streams can be combined into a chain of evidence that tells a measurement story. Through examples and demonstration, you'll be in a position to develop the best data and measurement playbook for your organization.

Register now for this online conference and learn new strategies for enhancing your L&D projects! If you are interested in attending this online event, but are unable to attend on either September 30 or October 1, register anyway and you’ll receive access to the recorded sessions and handouts after the event.