Many training pros disparage Kirkpatrick’s Level One evaluation. I’m not one of them. Sometimes disparagingly referred to as “smile sheets” or “happy sheets,” Level One evaluation tools are often dismissed as unimportant. One reason is that they often ask the wrong questions, are poorly designed and too long, and, ultimately, provide data that no one acts on. Let’s fix this.
What not to ask
Level One evaluation gauges participants’ reactions to training from their perspective. So why do we ask participants questions they are not really qualified to answer? This may seem heretical, but here are some areas we might consider exploring less in our Level One end-of-course surveys:
- Asking questions about instructional design. Participants are not learning experts, so while it feels good for them to offer opinions on the course design, are they really in the best position to say, for example, whether the level of interactivity was adequate, or whether the instructor’s presentation style was appropriate? And their views can be inconsistent: one participant’s perception of a lousy course may be another’s dream learning experience.
- Asking questions about the materials. Whether it’s the participant guide, the PowerPoint slides, or, in the case of eLearning, screen design and navigation, participants have different perceptions of the efficacy and ease-of-use of learning materials, depending on their individual ability and experience in dealing with bad media. Easy or hard, it depends on whom you ask.
- Asking questions about the environment. From the comfort of the chairs to the temperature and lighting in the room to the quality of the food, we’re obsessed with asking about the learning environment. Of course a lousy environment can negatively impact learning, but don’t obscure your Level One evaluation strategy with a ton of these questions.
Many of you won’t agree with this. You’ll suggest that collecting this information from participants couldn’t hurt, or that participants deserve an opportunity to express their views. They do, but stuffing these questions into what might quickly become a bloated Level One survey that no one appreciates may not be the best approach. More on this in a minute. First, let’s discuss what we should ask.
A better focus
The focus of Level One evaluation should be on value, specifically the value of the program to the participant. Level One evaluation, at a minimum, should center on three primary questions:
- Was the program worth the time you spent on it? Why?
- Do you believe you will be able to improve your performance because of this program? How?
- Would you recommend this program to others? Why?
Each question speaks directly to program usefulness. You get participant perspective and establish an important baseline from which you can follow up with these same people—and their management—months later. Thus your Level One evaluation feeds into your Level Three evaluation, a good test of the quality of your Level One effort. If you can use similar questions in a follow-up down the road, you are likely starting out in good shape. Questions about design, course materials, or environment would rarely, if ever, be useful to ask later on; more than likely, the participants have forgotten about them.
There are many ways to ask these questions. You can format them as open-ended questions for participants’ responses in their own words. Or use Likert or semantic differential scales, but be sure to provide space for elaboration and comments. You can periodically run the evaluation as an end-of-class focus group, where the ensuing discussion will likely yield additional useful and interesting information.
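As an illustrative sketch (not from the article), the three value questions could be captured in a simple survey definition, here in Python, pairing each Likert-scaled item with a free-text follow-up for elaboration. The names `LIKERT_5`, `SURVEY`, and `score` are hypothetical, and the scoring logic is one plausible way to summarize responses, not a prescribed method:

```python
# Hypothetical sketch of a minimal Level One survey built around the
# three value questions, each with a 5-point Likert scale and a
# free-text follow-up so participants can elaborate in their own words.

LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

SURVEY = [
    {"item": "This program was worth the time I spent on it.",
     "scale": LIKERT_5, "follow_up": "Why?"},
    {"item": "I believe I will be able to improve my performance because of this program.",
     "scale": LIKERT_5, "follow_up": "How?"},
    {"item": "I would recommend this program to others.",
     "scale": LIKERT_5, "follow_up": "Why?"},
]

def score(responses):
    """Average the Likert responses (1 = Strongly disagree, 5 = Strongly agree)."""
    return sum(responses) / len(responses)
```

Keeping the instrument this small makes it easy to reuse near-identical questions in a Level Three follow-up months later and compare the averages, which is exactly the baseline the three questions are meant to establish.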
What to do with those other questions
Now, back to all those questions we love to ask: they might be helpful, but they yield less value in the long run. There are better ways to get this information.
Want to gauge the effectiveness of the instructional design? Ask instructional designers (not necessarily the ones who designed the course). They, not the participants, have the right expertise. They can observe courses (classroom or eLearning), debrief instructors, and occasionally interview learners to understand where viewpoints line up or are in conflict. Remember, Level One efforts do not have to be just another end-of-course survey.
Interested in the quality of the materials (classroom or online)? Again, observation and debriefing, especially during pilots, is critical. It is the responsibility of instructional designers, media developers, editors, and SMEs to make sure the materials are first-rate before they get to the participants. If bad material finds its way into courseware, your quality control process is faulty. It shouldn’t happen.
And what about the learning environment? For classroom and eLearning scenarios, occasional focus groups and observational sessions with a representative sample of participants from a wide range of courses will give you the data you need.
These areas are particularly important when the three value questions (above) score low, which is why you should do more of this for low-rated courses. And mixing in experts, rather than relying solely on the participants themselves, mitigates the all-too-common rush to judgment when even just a few learner responses are negative. Without expert input—and balance—our zeal to remedy every complaint and resolve every concern can sometimes make things worse.
Execs don’t care, but you should
Let’s face it: senior executives rarely, if ever, focus on Level One evaluation (if they do, it’s a cause for concern; they should have bigger fish to fry). But that doesn’t mean you should treat it cavalierly. If you are collecting Level One data, be smart about it. Don’t gather so much information from participants that you drown in it, ignore it, or dilute the survey so much that you lose sight of its most important point—measuring the worthiness of your efforts. If participants’ perception of training’s value to them goes up, the value proposition of your entire training function will certainly go up as well.