Evaluating the effectiveness of eLearning is a key element of L&D teams’ work: It’s needed both to improve their efforts and to demonstrate the value of what they do. It’s also notoriously difficult.

For decades, the Kirkpatrick Model has been the default approach to evaluating eLearning and other training. But, as Jane Bozarth, The eLearning Guild’s research director, points out in her recent L&D Research Essentials report, it’s not the only way to measure training impact. Three alternative methods of evaluating eLearning might be worth considering.

The Learning-Transfer Evaluation Method

Will Thalheimer of Work-Learning Research, Inc. devised the eight-tiered Learning-Transfer Evaluation Method, or LTEM. LTEM is a response to and expansion of the Kirkpatrick Model; it is aligned with learning science and “is intentionally designed to catalog a more robust set of requirements than the Kirkpatrick-Katzell model—requirements targeted to overcome the most urgent failures in learning practice,” according to a report Thalheimer wrote to accompany the one-page model summary.

The lowest levels of LTEM resemble Kirkpatrick Levels 1 and 2, measuring attendance at, completion of, and participation in learning activities. Thalheimer emphasizes that measuring presence or attention does not validate learning; attendance and participation alone do not show that anyone learned anything.

LTEM’s third tier measures learners’ perceptions. A common misapplication of Kirkpatrick and “smile sheets” is to conflate positive learner response to eLearning with actual learning. Simply enjoying an activity—or even believing that it will help you on the job—is not an indication that you’ve learned something from the activity. Thalheimer divides his level three into two categories, depending on the rigor of the questions posed to learners about their experience.

Levels four, five, and six begin to address what learners got out of the eLearning: Do they remember facts and terms? Given a realistic scenario, do they make competent decisions? Can they perform related tasks competently? Do they still remember, make good decisions, or perform new skills several days after the learning event? These levels still address hypotheticals, though—performance on a quiz or in a learning scenario.

Level seven moves into learning transfer: How do learners perform on the job? And the top level, eight, looks beyond the individual to the effects of learning and its transfer on learners, their colleagues, their organization—and beyond.

According to Thalheimer, only levels five through eight provide information that validates that learning has occurred.
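The tiered structure above can be summarized compactly in code. The sketch below is an illustration, not part of Thalheimer’s published materials: the tier names are paraphrased from this article’s descriptions rather than taken from the official one-page LTEM summary, and the helper function simply encodes the claim that only tiers five through eight validate learning.

```python
from enum import IntEnum

class LTEMTier(IntEnum):
    """Eight LTEM tiers, names paraphrased from the article's descriptions."""
    ATTENDANCE = 1                    # presence at the learning activity
    ACTIVITY = 2                      # completion of / participation in activities
    LEARNER_PERCEPTIONS = 3           # "smile sheet" reactions, rigorous or not
    KNOWLEDGE = 4                     # recall of facts and terms
    DECISION_MAKING_COMPETENCE = 5    # competent decisions in realistic scenarios
    TASK_COMPETENCE = 6               # competent performance of related tasks
    TRANSFER = 7                      # performance on the job
    EFFECTS_OF_TRANSFER = 8           # impact on colleagues, organization, and beyond

def validates_learning(tier: LTEMTier) -> bool:
    """Per Thalheimer, only tiers five through eight validate that learning occurred."""
    return tier >= LTEMTier.DECISION_MAKING_COMPETENCE
```

For example, `validates_learning(LTEMTier.LEARNER_PERCEPTIONS)` returns `False`: positive reactions, on their own, do not demonstrate learning.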

Thalheimer authored Evaluating Learning, an eLearning Guild research report that explores how L&D teams evaluate learning and how they’d like to improve those efforts.

The Success Case Method

Robert Brinkerhoff’s Success Case Method takes a system-level view of evaluation: It acknowledges that the success of training relies in part on organizational factors, and that looking only at training is unlikely to create sustained performance improvements. “When training is simply ‘delivered’ as a separate intervention, such as a stand-alone program or seminar, it does little to change job performance,” Brinkerhoff wrote.

Brinkerhoff suggests focusing on three questions:

  1. How well is our organization using learning to drive needed performance improvement?
  2. What is our organization doing that facilitates performance improvement from learning? What needs to be maintained and strengthened?
  3. What is our organization doing, or not doing, that impedes performance improvement from learning? What needs to change?

These questions should be part of an evaluation strategy that aims to build capacity throughout the organization to improve training and performance.

A Success Case Method (SCM) study entails:

  1. Identifying teams or individuals who have successfully applied training to achieve improved performance and/or business results.
  2. Interviewing the individuals and documenting their success.
  3. Examining, in nearly all SCM studies, cases of “nonsuccess” as well, with close attention to organizational and other factors that differed between successful and unsuccessful cases.

“Just as some small groups have been very successful in applying a new approach or tool, there are likewise other small groups at the other extreme that experienced no use or value. Investigating the reasons for lack of success can be equally enlightening. Comparisons between the groups are especially useful,” Brinkerhoff wrote.

The CIPP Evaluation Model

The Context, Input, Process, and Product (CIPP) Evaluation Model, devised by Daniel Stufflebeam, may be useful for evaluating a curriculum, academy, or multi-course certification workshop, Bozarth writes. It provides a systematic approach to examining the curriculum development process itself, rather than only the learning product or its results.

The CIPP Model can be used to guide design, development, and assessment of learning projects and to evaluate the effectiveness of the learning program. In using CIPP to guide curriculum or learning design, and to evaluate the results, L&D practitioners should examine:

  • Context: What are the goals? Is eLearning needed? How does this eLearning project relate to other training materials?
  • Inputs: Consider the target audience of learners. What knowledge do they already have? What materials and devices are available? What practical problems will the eLearning help them solve?
  • Process: Will learners have opportunities to use and apply the information and skills during training? What is the environment where they will consume and use the eLearning? How will the learning process be evaluated?
  • Product: What type(s) of assessment will be used? Will assessment occur during or only at the end of the eLearning? What will be assessed? How do learners use what they were taught? What were the outcomes and how do they compare with the goals?

These questions can be asked during all stages of learning design, development, and implementation—they need not be left for post-eLearning evaluation. In fact, a unique aspect of the CIPP Model is that it is meant to be used throughout the process, while other methods look only at whether training “worked.”

Dive into learning research

In addition to evaluating eLearning, L&D Research Essentials examines research on 10 other areas of eLearning design, development, and impact. Download your copy today and dive into current and highly recommended peer-reviewed research on the topics most relevant to you.