Marc My Words: Don’t Dismiss Level One Evaluation

Many training pros disparage Kirkpatrick’s Level One evaluation. I’m not one of them. Sometimes disparagingly referred to as “smile sheets” or “happy sheets,” Level One evaluation tools are often dismissed as unimportant. One reason is that they often ask the wrong questions, are poorly designed and too long, and, ultimately, provide data that is not acted upon. Let’s fix this.

What not to ask

Level One evaluation gauges participants’ reactions to training from their perspective. So why do we ask participants questions they are not really qualified to answer? This may seem heretical, but here are some areas we might consider exploring less in our Level One end-of-course surveys:

  1. Asking questions about instructional design. Participants are not learning experts, so while it feels good for them to offer opinions of the course design, are they really in the best position to say, for example, whether the level of interactivity was adequate, or whether the instructor’s presentation style was appropriate? And their views can be inconsistent; one participant’s perception of a lousy course may be another’s dream learning experience.
  2. Asking questions about the materials. Whether it’s the participant guide, the PowerPoint slides, or, in the case of eLearning, screen design and navigation, participants have different perceptions of the efficacy and ease-of-use of learning materials, depending on their individual ability and experience in dealing with bad media. Easy or hard, it depends on whom you ask.
  3. Asking questions about the environment. From the comfort of the chairs to the temperature and lighting in the room to the quality of the food, we’re obsessed with asking about the learning environment. Of course a lousy environment can negatively impact learning, but don’t obscure your Level One evaluation strategy with a ton of these questions.

Many of you won’t agree with this. You’ll suggest that collecting this information from participants couldn’t hurt, or that participants deserve an opportunity to express their views. They do, but stuffing these questions into what might quickly become a bloated Level One survey that no one appreciates may not be the best approach. More on this in a minute. First, let’s discuss what we should ask.

A better focus

The focus of Level One evaluation should be on value, specifically the value of the program to the participant. Level One evaluation, at a minimum, should center on three primary questions:

  1. Was the program worth the time you spent on it? Why?
  2. Do you believe you will be able to improve your performance because of this program? How?
  3. Would you recommend this program to others? Why?

Each question speaks directly to program usefulness. You get participant perspective and establish an important baseline from which you can follow up with these same people—and their management—months later. Thus your Level One evaluation feeds into your Level Three evaluation, a good test of the quality of your Level One effort. If you can use similar questions in a follow-up down the road, you are likely starting out in good shape. Questions about design, course materials, or environment would rarely, if ever, be useful to ask later on; more than likely, the participants have forgotten about them.

There are many ways to ask these questions. You can format them as open-ended questions for participants’ responses in their own words. Or use Likert or semantic differential scales, but be sure to provide space for elaboration and comments. You can periodically run the evaluation as an end-of-class focus group, where the ensuing discussion will likely yield additional useful and interesting information.

What to do with those other questions

Now back to what to do with all those questions we love to ask, the ones that might be helpful but yield less value in the long run. There are better ways to get this information.

Want to gauge the effectiveness of the instructional design? Ask instructional designers (not necessarily the ones who designed the course). They, not the participants, have the right expertise. They can observe courses (classroom or eLearning), debrief instructors, and occasionally interview learners to understand where viewpoints line up or are in conflict. Remember, Level One efforts do not have to be just another end-of-course survey.

Interested in the quality of the materials (classroom or online)? Again, observation and debriefing, especially during pilots, are critical. It is the responsibility of instructional designers, media developers, editors, and SMEs to make sure the materials are top-rate before they get to the participants. If bad material finds its way into courseware, your quality control process is faulty. It shouldn’t happen.

And what about the learning environment? For classroom and eLearning scenarios, occasional focus groups and observational sessions with a representative sample of participants from a wide range of courses will give you the data you need.

These areas are particularly important when the three value questions (above) score low, which is why you should do more of this for low-rated courses. And mixing in experts, rather than relying solely on the participants themselves, mitigates the all-too-common rush to judgment when even just a few learner responses are negative. Without expert input—and balance—our zeal to remedy every complaint and resolve every concern can sometimes make things worse.

Execs don’t care, but you should

Let’s face it: senior executives rarely, if ever, focus on Level One evaluation (if they do, it’s a cause for concern; they should have bigger fish to fry). But that doesn’t mean you should treat it cavalierly. If you are collecting Level One data, be smart about it. Don’t gather so much information from participants that you drown in it, ignore it, or dilute the survey so much that you lose sight of its most important point—measuring the worthiness of your efforts. If participants’ perception of training’s value to them goes up, the value proposition of your entire training function will certainly go up as well.

Want to learn more?

There are literally hundreds of web resources on Kirkpatrick’s model.
