You can easily psych out a multiple-choice test, right? I mean, since you're a developer of online instruction, I assume part of the reason you reached your current position is that you're a good test taker.

Now you're on the other end of the spectrum, designing multiple-choice questions and hoping your learners' answer choices will be based on their actual knowledge rather than their clever testmanship. So the challenge in writing these items is to create a valid measure of the learner's knowledge. We don't want learners to get clues from a question's construction instead of its content, nor do we want to resort to trickery as a way to increase difficulty.

In educational settings, instructors of online learning courses often have the ability to assign written papers, that is, constructed-response tests. But in the corporate and organizational training world, online courses are usually stand-alone and thus depend on selected-response items that can be graded automatically, such as true/false, multiple-choice, and matching questions. Of these, multiple-choice questions are most often used. Here are some common things we expect multiple-choice questions to do; we’ll examine them one by one:

  • Be a true and fair test of the knowledge
    • Don't give away the right answer
    • Don't resort to trickery by trying to make a question harder
  • Minimize students’ ability to cheat
  • Provide ample feedback and remediation
  • (Nice to have) Use variables for behind-the-scenes calculations, feedback, branching, and record-keeping
  • Enable us to analyze how questions performed

A true and fair test of the knowledge

The thing about multiple-choice questions is that the answer is right there on the screen. So our challenge as question writers is to construct the question and its answer choices in such a way that the learner really has to master the objective in order to select the correct choice.

Well-written multiple-choice items should have these preferred-practice attributes:

  • Most of the wording should be in the question stem.
    • Answer choices should be brief and parallel.
    • Each question should address a single topic.
  • Use three to five answer choices, with four being standard. Of those:
    • One choice should be the unambiguous correct answer.
    • One choice should be almost correct. The intent is to distinguish those who truly know the content from those whose knowledge is more superficial.
    • A third choice can be like the previous one, or it can be less correct but sound plausible to the uninformed.
    • One choice should be clearly wrong (but in the same context).
    • If you occasionally use a fifth choice, it’s a give-away if you make it the correct answer. (See item analysis, below.)
  • Research has shown that many instructors write multiple-choice items only at the recall level of knowledge. See my other article on writing them for higher-level thinking skills.
  • Use a simple sentence structure so the learner doesn’t have to guess what you’re asking.
    • If you want the question to be more difficult, do it with distractors that are carefully crafted around the knowledge itself, not with tricky wording or unclear meaning.
    • Avoid double negatives.
  • Keep a question’s answer choices approximately the same length. If there’s a single long choice, it is usually the correct answer because it is full of qualifiers.
  • Does it surprise you that choices C and D are the most common place for the correct answer? Mix it up. And in doing so, especially if you’ll rely on automatic randomization, be sure the choices make sense in any order.

Multiple-choice questions aren’t only useful for testing purposes. I often use them as a way to increase the density of the instruction, especially where I expect learners have had some prior exposure to the content. By asking embedded questions early you can inject a little suspense, plus deliver some of the content in the feedback for increased efficiency.

Multiple-choice questions are also good self-check devices, especially if your authoring system or learning management system (LMS) allows tailored feedback at the answer level and not just the overall question level. Speaking of self-checks, see how you do on a few typical questions. (Remember what I said about psyching out tests? I’m guessing you can score 100% without any instruction on the content.)

Give-aways. See the next section for answers.

  1. Which one is true of a free market?
    A. Governments intervene
    B. Governments set prices
    C. Governments make production decisions
    D. Supply and prices balance based on scarcity and desires
  2. Economics studies which of the following concepts?
    A. Consumption decisions
    B. Production technology
    C. How a society decides what and how it will produce its goods and services, how it will distribute them, and for whom
    D. How to govern
  3. What math word that comes from the Latin word for “to put out” is defined as a rate of change that increases over time?
    A. Algorithmic
    B. Geometric
    C. Exponential
    D. Fractions
  4. What is the name for the sharp teeth in mammals, such as dogs, that are good for tearing meat?
    A. Incisors
    B. Canines
    C. Cutters
    D. Bovines

Answers

Question 1: D

The first three choices sound anything but “free.”

Also, the most common location for the correct answer is choice C or D.

Question 2: C

Note all the qualifiers in choice C – it’s really a definition; a dead give-away. Remember to keep answer choices parallel and about the same length.

Question 3: C

Is this question meant to measure my knowledge of Latin, or the definition of “exponential”? As you can see in this example, it is difficult to do both in one question. In this question it really doesn't matter if I know the Latin root or not; it’s not a hint to the correct answer. If you really want to assess that knowledge then use a separate question just for that.  Each question should test a single parcel of knowledge unless it is intentionally testing more complex concepts or applications.

Note the principal distractor, “Geometric,” the most likely term people confuse with “exponential.”

Question 4: B

Learners will thank you for the dog/canine hint, but why give away the answer?

Minimize students’ ability to cheat

Some students may feel compelled or otherwise tempted to cheat on tests. This can occur at two levels.

Level 1: Who is the test taker? Make sure the test-taker is who they say they are. Recently in the United States a test-for-hire scheme was uncovered on college placement exams. It turns out there was an easily exploited flaw in the personal identification process.

Level 2: Thwarting individual cheating. In my experience, LMSs don't provide much help in the question design process. However, an LMS can greatly increase test security by randomizing the sequence of questions or choices within questions.

Let me add a word here about the capabilities of authoring systems and LMSs/CMSs (learning or content management systems). What I will describe in the next few paragraphs are features that are very useful from test security and management perspectives. But each LMS is different, so you will have to know the capabilities of your particular system, including whether you can customize standard reports.

Now let’s get back to test security. If your LMS can randomize the sequence of questions or, better yet, draw a stipulated number of questions from a larger question bank, then each student gets a slightly different yet equitable test. It is desirable to cover each objective no matter which questions the LMS draws. Many LMSs organize questions by objectives or groups and then draw a sample from each group.
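
To make that concrete, here is a minimal sketch, in Python, of what drawing a sample from each objective group might look like. The group names, question IDs, and the per-objective draw count are invented for illustration; your LMS does this internally with its own data structures.

    import random

    # Hypothetical question bank, organized by objective (group).
    question_bank = {
        "objective_1": ["q01", "q02", "q03", "q04"],
        "objective_2": ["q05", "q06", "q07"],
        "objective_3": ["q08", "q09", "q10", "q11"],
    }

    def build_test(bank, per_objective=2):
        """Draw a fixed number of questions from each objective group,
        then shuffle the combined list so question order also varies."""
        drawn = []
        for group, questions in bank.items():
            drawn.extend(random.sample(questions, min(per_objective, len(questions))))
        random.shuffle(drawn)
        return drawn

    print(build_test(question_bank))   # e.g. ['q09', 'q02', 'q06', 'q11', 'q05', 'q01']

Because the sample is drawn per group, every student's test still covers every objective, even though no two tests are identical.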

Randomization has a couple of implications. If you like to give students confidence-building questions early in a test to help them overcome test anxiety, you may or may not retain this capability with randomized selection. With randomization, it is also possible that one question could give away the answer to another if it comes first.

You may also be able to randomize the sequence in which the LMS lists answer choices within each question. Here again the reason is to preclude the usefulness of “cheat sheets.” Think through what randomization means. Do you like to use the phrase, “All of the above”? Buzzer sound here: that won’t work with random sequencing. I don’t like that phrase anyway: it’s often a give-away.
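
If you're curious what choice randomization amounts to, here is a minimal sketch, assuming each question stores its choices along with the index of the correct one (a layout I made up for illustration). It also shows why a literal “All of the above” choice stops making sense once the order is shuffled.

    import random

    def shuffle_choices(choices, correct_index):
        """Shuffle the answer choices and return the shuffled list plus the
        new position of the correct answer."""
        order = list(range(len(choices)))
        random.shuffle(order)
        shuffled = [choices[i] for i in order]
        return shuffled, order.index(correct_index)

    choices = [
        "Governments intervene",
        "Governments set prices",
        "Governments make production decisions",
        "Supply and prices balance based on scarcity and desires",
    ]
    shuffled, new_correct = shuffle_choices(choices, correct_index=3)
    # A choice like "All of the above" would now be meaningless, because
    # "above" no longer refers to a fixed set of choices.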

Even though the subject of this article is multiple-choice questions, I want to mention one thing about matching questions. In many ways they follow similar rules to multiple-choice question design. I've noticed most authoring systems and LMSs treat matching questions differently than I would prefer. Instructionally, I like to have an unequal number of choices in the groups that are being matched. This way the learner cannot get the last one or two pairs correct merely by process of elimination. With an uneven number there will always be one left over. But some LMSs only offer equal-size lists. Sigh.
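
For what it's worth, here is a sketch of what an unequal-list matching item might look like as data. The prompts, choices, and answer key are invented for illustration; the point is simply that the choice list carries a couple of extras, so the last pair can never be filled in by elimination.

    # A matching item with deliberately uneven lists: four prompts, six
    # possible matches, so two choices are always left over.
    matching_item = {
        "prompts": ["incisor", "canine", "molar", "premolar"],
        "choices": ["cutting", "tearing", "grinding", "crushing",
                    "swallowing", "tasting"],          # last two are extra distractors
        "answer_key": {"incisor": "cutting", "canine": "tearing",
                       "molar": "grinding", "premolar": "crushing"},
    }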

Provide ample feedback and remediation

Unless a test is purely for assessment and the only feedback will be the student’s score, I always like to include instructive feedback for each question. The amount depends on the purpose of the test and the difficulty of the topic. Is this job training where we want employees to truly master each objective? Then we’d better catch misconceptions and remediate at that famous “teachable moment.”

Sometimes it is sufficient to give feedback at the question level. But I often like to give specific feedback for each answer choice, even more so when I use multiple-choice questions for embedded self-checks. I want to clear up any misunderstandings before the learner proceeds.
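
Conceptually, per-choice feedback just means the question record carries a feedback string for every choice rather than one message for the whole question. Here is a minimal sketch, with a structure and wording of my own rather than any particular authoring tool's format.

    # One way to represent feedback at the answer-choice level.
    question = {
        "stem": "Which one is true of a free market?",
        "choices": [
            {"text": "Governments set prices",
             "correct": False,
             "feedback": "Price controls are intervention, the opposite of a free market."},
            {"text": "Supply and prices balance based on scarcity and desires",
             "correct": True,
             "feedback": "Right: in a free market, prices emerge from supply and demand."},
        ],
    }

    def feedback_for(question, selected_index):
        """Return the feedback written for the specific choice the learner picked."""
        return question["choices"][selected_index]["feedback"]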

Use variables, branching and logic

Sometimes we may want to elaborate more than a feedback block will permit, or even loop the learner back through the module or take them to an alternative description with more examples. This means we need branching, and in its more exotic forms it may also require variables and logic. We have said it before: it all depends on your authoring system and LMS/CMS capabilities.
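
To illustrate the kind of logic I mean, here is a small sketch of routing a learner after a self-check. The threshold, module names, and function are all invented; in practice you would express this through your authoring tool's own branching features.

    def next_step(score_percent, missed_objectives):
        """Decide where to route the learner after a self-check.
        The threshold and module names are illustrative only."""
        if score_percent >= 80 and not missed_objectives:
            return "continue_to_next_module"
        if missed_objectives:
            # Loop the learner back only through the first objective missed.
            return "review_module:" + missed_objectives[0]
        return "alternative_explanation_with_more_examples"

    print(next_step(60, ["free_markets"]))   # review_module:free_markets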

Item analysis

You have put all this care into the design of your questions. How did those questions perform? Were the range and average scores about what you expected? Did your carefully constructed distractors help you discriminate those who know the material well from those who don’t? If a question was missed several times, was it missed by those with lower scores as you would expect, or did some of the high scorers also miss it? Which choices did learners select?

The answers to these questions complete the circle. You have designed the content and delivery of the training, and you carefully constructed the test (or quizzes or embedded self-checks). The item analysis report can now tell you not only if learners “got it” – scores alone can tell you that part – but also give you insight into what they were thinking.

It’s wonderful if your LMS can produce item analysis reports, even more so if you can tailor the output to serve your needs precisely. However, in the absence of such automated reports you can perform item analysis with just a pen and paper as long as you can determine which answers each student selected. You may have to drill down into detailed records one by one (I hope the data will be more accessible than that), but assuming you can find the raw data, all you have to do is tally the number of times each answer choice was selected for each question.
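
The tally itself is trivial once you have the raw data. Here is a minimal sketch, assuming the responses can be exported as (student, question, selected choice) rows; that format is my assumption, not a standard.

    from collections import defaultdict

    # Raw responses exported from the LMS, assumed to arrive as
    # (student_id, question_id, selected_choice) rows.
    responses = [
        ("s1", "q1", "A"), ("s2", "q1", "D"), ("s3", "q1", "D"),
        ("s1", "q2", "C"), ("s2", "q2", "C"), ("s3", "q2", "B"),
    ]

    tally = defaultdict(lambda: defaultdict(int))
    for student, question, choice in responses:
        tally[question][choice] += 1

    for question, counts in tally.items():
        print(question, dict(counts))   # q1 {'A': 1, 'D': 2} ...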

Take a look at the sample report in the table below. What can you infer about these eight questions?

Here are some of the things I see in this report; you may see other details:

  • Looking at column 3 (“% Correct Whole Group”), we see that three questions scored rather low; #5 was only answered correctly by 40% of this group of students (8 out of 20).
  • Looking at the responses for #5, we see that everyone who missed it selected the same answer choice, C. (The correct choice for each question appears in bold.) Furthermore, it was missed somewhat equally by students in all three scoring ranges (overall, top third, and bottom third). It appears that either the question is misleading in a consistent way, or the instruction was lacking. This report helps us see where we may need further analysis.
  • Remember what we said earlier about questions with five choices? We see it graphically here: #1 and #7 are the only questions with five choices, and the fifth choice is the correct answer in both cases. Recommendation: Try harder to have four distinct answers.
  • What about question #4? Those who missed it chose the full range of wrong answers. This is what you would expect, especially if it’s a difficult question to start with. We’re each unique and may interpret complex concepts in a variety of ways.
  • Question #1 did a good job of finding those learners who knew the material except perhaps for one important nuance. (I assume choice D is the distractor that is close in meaning to correct answer E.)
  • Look at the % Correct columns. First, your LMS may or may not compile this type of data. (As a side note, this may be an acid test of whether trainers or database specialists designed the LMS, along with the feature of having unequal lists for matching questions.) You would expect only occasional misses in the top-third column and more frequent misses in the column for those scoring in the bottom third. Question #5 again stands out because so many in the top third missed it. A small sketch of how these group percentages can be computed follows this list.
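
Here is the sketch promised above: computing per-question % correct for the whole group and for the top- and bottom-scoring thirds. The data layout is invented, and I'm assuming “top third” and “bottom third” simply mean the highest- and lowest-scoring thirds of the class.

    def percent_correct_by_group(students):
        """students: a list of records such as
        {"total": 17, "correct": {"q1": True, "q2": False, ...}}
        Returns per-question % correct for the whole group and for the
        top- and bottom-scoring thirds."""
        ranked = sorted(students, key=lambda s: s["total"], reverse=True)
        third = max(1, len(ranked) // 3)
        groups = {
            "whole_group": ranked,
            "top_third": ranked[:third],
            "bottom_third": ranked[-third:],
        }
        report = {}
        for q in ranked[0]["correct"]:
            report[q] = {
                name: round(100 * sum(s["correct"][q] for s in grp) / len(grp))
                for name, grp in groups.items()
            }
        return report

A question that most of the top third misses, like #5 in the sample report, is exactly the kind of flag this comparison is meant to raise.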

Conclusion

I have presented a brief review of design principles for multiple-choice questions along with ways to use your authoring system and/or LMS/CMS to help manage test security and analyze how the questions themselves performed. You can find much more on these topics with quick Web searches. Use the references in this article as a starting point if you wish. And be sure you know the full capabilities of your LMS.

References 

Classroom assessment. Online course developed by the Pinellas School District and the Florida Center for Instructional Technology at USF. Downloaded Nov. 2, 2011 from http://fcit.usf.edu/assessment/selected/responseb.html

Hoepfl, Marie C. (1994). Developing and evaluating multiple-choice tests. The Technology Teacher, April 1994, pp. 25-26.

McCowan, Richard C. (1999). Developing multiple-choice tests: tips and techniques. Research Foundation of State University of New York (SUNY)/Center for Development of Human Services. Downloaded Nov. 2, 2011 from http://www.eric.ed.gov/PDFS/ED501714.pdf

Malamed, Connie. Ten rules for writing multiple-choice questions. Downloaded Sept. 30, 2011 from http://theelearningcoach.com/elearning_design/rules-for-multiple-choice-questions/

Runté, Robert (2001). How to write tests. Faculty of Education, University of Lethbridge, Canada. Downloaded from http://www.uleth.ca/edu/runte/tests/

Runté, Robert (2001). Item analysis without complicated statistics. Faculty of Education, University of Lethbridge, Canada. Downloaded from http://www.uleth.ca/edu/runte/tests/iteman/interp/interp.html

Writing and taking multiple-choice questions. Downloaded Nov. 2, 2011 from http://homepage.mac.com/astronomyteacher/dvhs/pdfs/multiplechoice.pdf