Nuts and Bolts: What You Measure Is What You Get

Some years back a colleague had an office on the campus of a large residential treatment facility. There was onsite surgical and short-term hospitalization capability; because of this there were several huge generators onsite, located near her office. Every Friday, without fail, a team of facility engineers would come with their clipboards and run the generators for 30 minutes. They would then check a box indicating that the generators had tested A-OK.

Then Hurricane Floyd hit, knocking out power to the facility. The lights went out. The medical equipment shut down. The generators kicked on. And they ran…

… for 30 minutes.

Oops.

We see the problem with bad measures playing out in our daily lives: The customer service representative who is evaluated only on the number of calls he handles an hour. The organization that wants to measure social initiatives by the number of hits on a discussion forum rather than the quality of the comments or the meaningful connections made. The IT department that uses “number of tickets closed” as a measure of service. (I once got a new laptop because the “G” key popped off the keyboard of my old one. Ticket closed!)

Who is the measurement for?

I once asked a group of conference attendees why we measure, and they said: “For the LMS.” Well … OK. I’ve got nothin’ to say to that.

But when others have said the measure is for the benefit or use of a human being, well, that’s better. Who, and what? A learner who wants to know how well she performs against some standard? A manager who will use the information in doing a formal competency assessment at year-end review time? A good-faith attempt to evaluate whether a worker will provide more empathetic answers to an upset caller?

Or if the answer is, “for the bottom line,” then we need to know exactly what performance will affect that, and find measures that really measure it. Will sales go up? Can we track that back to the learning event? Will there be a reduction in customer complaints? What learner behavior will bring about that result? How can we assess that? (Note: A control group is a good place to start.)

What is the real result you’re after? For whom?

It’s no secret that courses often don’t have a way of reaching the learners they’re meant for, while eager-beaver learners find their way to every course available. What is the point of getting a bunch of completions from the wrong people? Or great test scores from workers who didn’t need the training to begin with? And on the matter of test scores: Is the one who scored 100 percent really better at doing her job? Is the one who scored 93 percent really a failure?

Begin with the end in mind: who is the target audience, what do they need to do, how do we measure whether they are the ones accessing the program, and how do we measure their performance? (An aside: Enrollment may have nothing to do with how good a program is. If enrollment is low, you may have a marketing problem.) Are you really after every single person in the organization contributing equally to some online conversation, or evidence of meaningful connections, emerging solutions, or new product ideas?

Meaningful measures

So: When looking for measures, try to find things that are meaningful, that give you real information to help real people do their jobs and to help organizations perform more efficiently. Beware of easy measures and vanity metrics. Your generators? Will they kick on and run for 30 minutes during a test—or will they hold up for two days during a power outage? Remember: If you’re going to grade the employee by the number of tickets closed, then don’t be surprised when people close tickets. What you measure is what you get.

Want more?

See my October 2012 column “Assessing the Value of Online Interactions” and “Design Assessments First” from April 2013. Also: Judith Hale’s Performance-Based Evaluation (San Francisco: Jossey-Bass/Pfeiffer, 2002).

And just in case: “How to Keep a Generator Running When You Lose Power.” 
