Here’s the problem (and you’ve been there): You work through an online course and reach the completion screen, only to face 25 badly written multiple-choice questions on things like the fine points of a policy, seemingly random definitions, or rarely occurring product failures.

Why? Because the designer got the course finished, realized she needed something to prove completion, and went back scrambling for 25 multiple-choice questions. She pulled things that are easy to test, like “define” or “describe” or “match.”

Assessment is one of the most consistent problems I see in instructional design. My beef isn’t even so much with badly written items as with what they assess: usually little more than knowledge and recall of often-irrelevant information. And it’s no wonder: When I was researching this column, I pulled from my shelf one of the gold-standard instructional design texts from grad school, only to find exactly two pages on the subject of assessment. The disconnect among workplace performance, course performance objectives, assessment, and content is a huge contributor to learner failure.

Why does it happen?

For starters, assessment is too often an afterthought. Some blame may rest with the ADDIE project-planning model so popular among instructional designers: because evaluation comes last in that model, many designers conflate assessing learner performance with evaluating the course, and defer both to the end. Apart from when assessments are created, there’s the matter of when they’re given: We depend far too much on summative end-of-course assessment when formative knowledge checks along the way would serve the learner far better.

Another big reason: Poorly written or academic learning objectives drive assessment items. Objectives built on verbs such as “list,” “define,” and “describe” practically force the designer to load bulleted content onto slides, followed by a multiple-choice test or Jeopardy!-style quiz. The content is easy to write, and bullet, and test … but the items aren’t testing performance. Really: When’s the last time your boss asked you to list something? Do you want a washing machine repairman who can name the parts, or one who can fix the machine? How about the surgeon who will be taking out your appendix?

How to fix it?

Short answer? Create assessments before developing content. Once you’ve established the workplace performance and set the performance objectives for learners, create the plan for how you’ll assess them. Course content and activities should support eventual achievement of those goals. My geekier readers will recognize a bit of similarity here to test-driven software development, a practice that grew out of the extreme programming movement. Those less geeky, or from other fields, might remember Stephen Covey’s mantra: “Begin with the end in mind.” Is the target work performance that the washing machine repairman will replace the agitator? Assessing that may mean an interactive online simulation of replacing the agitator, or an onsite supervised final exam in which the learner replaces one. In either case, design the course so the learner can do that. In our work, the end should be successful work performance, not scoring 98 on a knowledge quiz.
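For readers who haven’t seen test-driven development, the parallel can be sketched in a few lines of Python. This is a hypothetical illustration (the `WashingMachine` class and its test are invented for this column, not drawn from any real system): the test — the “assessment” — is written first, and the implementation exists only to make it pass, just as course content should exist only to support the assessed performance.

```python
# Test-driven sketch: write the assessment first.
# This test defines the target performance before any
# "content" (implementation) exists.
def test_agitator_replacement():
    machine = WashingMachine(agitator="worn")
    machine.replace_agitator("new")
    assert machine.agitator == "new"

# Only now do we build the simplest thing that satisfies
# the test above -- nothing more, nothing less.
class WashingMachine:
    def __init__(self, agitator):
        self.agitator = agitator

    def replace_agitator(self, part):
        self.agitator = part

test_agitator_replacement()
print("performance demonstrated")
```

The point of the analogy: because the test came first, there is no temptation to “teach” (implement) anything the assessment doesn’t demand.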

So: Work backwards. Write the performance goals, decide how you will assess them, and then design the program. The content and activities you create should support eventual achievement of those goals. Otherwise, other aspects of the design process will shape the assessment, and not in useful ways: Once the content exists, it’s just too easy to start pulling assessment items from it.

Want more?

Hale, J. (2002). Performance-Based Evaluation. San Francisco: Jossey-Bass/Pfeiffer.