Some years back a colleague had an office on the campus of a large residential treatment facility. The facility had onsite surgical and short-term hospitalization capability, so several huge generators sat near her office. Every Friday, without fail, a team of facility engineers would come with their clipboards and run the generators for 30 minutes. They would then check a box indicating that the generators had tested A-OK.

Then Hurricane Floyd hit, knocking out power to the facility. The lights went out. The medical equipment shut down. The generators kicked on. And they ran…

… for 30 minutes.

Oops.

We see the problem with bad measures playing out in our daily lives: The customer service representative who is evaluated only on the number of calls he handles per hour. The organization that measures its social initiatives by the number of hits on a discussion forum rather than the quality of the comments or the meaningful connections made. The IT department that uses “number of tickets closed” as a measure of service. (I once got a new laptop because the “G” key popped off the keyboard of my old one. Ticket closed!)

Who is the measurement for?

I once asked a group of conference attendees why we measure, and they said: “For the LMS.” Well … OK. I’ve got nothin’ to say to that.

But when others have said the measure is for the benefit or use of a human being, well, that’s better. Who, and what? A learner who wants to know how well she performs against some standard? A manager who will use the information in a formal competency assessment at year-end review time? A good-faith attempt to evaluate whether a worker will provide more empathetic answers to an upset caller?

Or if the answer is, “for the bottom line,” then we need to know exactly what performance will affect it, and find measures that really capture that performance. Will sales go up? Can we track that back to the learning event? Will there be a reduction in customer complaints? What learner behavior will bring about that result? How can we assess that? (Note: A control group is a good place to start.)

What is the real result you’re after? For whom?

It’s no secret that courses often don’t have a way of reaching the learners they’re meant for, while eager-beaver learners find their way to every course available. What is the point of getting a bunch of completions from the wrong people? Or great test scores from workers who didn’t need the training to begin with? And on the matter of test scores: Is the one who scored 100 percent really better at doing her job? Is the one who scored 93 percent really a failure?

Begin with the end in mind: Who is the target audience, what do they need to do, how do we measure whether they are the ones accessing the program, and how do we measure their performance? (An aside: Enrollment may have nothing to do with how good a program is. If enrollment is low, you may have a marketing problem.) Are you really after every single person in the organization contributing equally to some online conversation, or after evidence of meaningful connections, emerging solutions, or new product ideas?

Meaningful measures

So: When looking for measures, try to find things that are meaningful, that give you real information to help real people do their jobs and to help organizations perform more efficiently. Beware of easy measures and vanity metrics. Your generators? Will they kick on and run for 30 minutes during a test—or will they hold up for two days during a power outage? Remember: If you’re going to grade the employee by the number of tickets closed, then don’t be surprised when people close tickets. What you measure is what you get.

Want more?

See my October 2012 column “Assessing the Value of Online Interactions” and “Design Assessments First” from April 2013. Also: Judith Hale, Performance-Based Evaluation (San Francisco: Jossey-Bass/Pfeiffer, 2002).

And just in case: “How to Keep a Generator Running When You Lose Power.”