
The Thing about Multiple-Choice Tests …

You can easily psych out a multiple-choice test, right? I mean, since you're a developer of online instruction, I assume that part of the reason you reached your current position is that you're a good test taker.
Now you're on the other end of the spectrum, designing multiple-choice questions and hoping your learners' answer choices will be based on their actual knowledge and not their clever testmanship. So the challenge in writing these items is to create a valid measure of the learner's knowledge. We don't want learners to get clues from a question's construction rather than its content, nor do we want to resort to trickery as a way to increase difficulty.
In educational settings, instructors of online learning courses often have the ability to assign written papers (that is, constructed-response tests). But in the corporate and organizational training world, online courses are usually stand-alone and thus depend on selected-response items that can be graded automatically, such as true/false, multiple-choice, and matching questions. Of these, multiple-choice questions are most often used. Here are some common things we expect multiple-choice questions to do; we'll examine them one by one:
- Be a true and fair test of the knowledge
- Don’t give away the right answer
- Don't resort to trickery as a way to make a question harder
- Minimize students’ ability to cheat
- Provide ample feedback and remediation
- (Nice to have) Use variables for behind-the-scenes calculations, feedback, branching, and record-keeping
- Enable us to analyze how questions performed
A true and fair test of the knowledge
The thing about multiple-choice questions is that the answer is right there on the screen. So the challenge as question writers is to construct the question and its answer choices in such a way that the learner really has to master the objective in order to select the correct choice.
Well-written multiple-choice items should have these preferred-practice attributes:
- Most of the wording should be in the question stem.
- Answer choices should be brief and parallel.
- Each question should address a single topic.
- Use three to five answer choices, with four being standard. Of those:
  - One choice should be the unambiguously correct answer.
  - One choice should be almost correct. The intent is to distinguish those who truly know the content from those whose knowledge is more superficial.
  - A third choice can be like the previous one, or it can be less correct but sound plausible to the uninformed.
  - One choice should be clearly wrong (but in the same context).
  - If you occasionally use a fifth choice, then it's a give-away if you make that the correct answer. (See item analysis, below.)
- Research has shown that many instructors write multiple-choice items only at the recall level of knowledge. See my other article on writing them for higher-level thinking skills.
- Use a simple sentence structure so the learner doesn't have to guess what you're asking.
- If you want the question to be more difficult, do it with distractors that are carefully crafted around the knowledge itself, not with tricky wording or unclear meaning.
- Avoid double negatives.
- Keep a question's answer choices approximately the same length. If there's a single long choice, it is usually the correct answer because it is full of qualifiers.
- Does it surprise you that choices C and D are the most common place for the correct answer? Mix it up. And in doing so, especially if you'll rely on automatic randomization, be sure the choices make sense in any order.
Multiple-choice questions aren't only useful for testing purposes. I often use them as a way to increase the density of the instruction, especially where I expect learners have had some prior exposure to the content. By asking embedded questions early you can inject a little suspense, plus deliver some of the content in the feedback for increased efficiency.
Multiple-choice questions are also good self-check devices, especially if your authoring system or learning management system (LMS) allows tailored feedback at the answer level and not just the overall question level. Speaking of self-checks, see how you do on a few typical questions. (Remember what I said about psyching out tests? I'm guessing you can score 100% without any instruction on the content.)
Give-aways. See the next section for the answers.
1. Which one is true of a free market?
   - A. Governments intervene
   - B. Governments set prices
   - C. Governments make production decisions
   - D. Supply and prices balance based on scarcity and desires
2. Economics studies which of the following concepts?
   - A. Consumption decisions
   - B. Production technology
   - C. How a society decides what and how it will produce its goods and services, how it will distribute them, and for whom
   - D. How to govern
3. What math word that comes from the Latin word for "to put out" is defined as a rate of change that increases over time?
   - A. Algorithmic
   - B. Geometric
   - C. Exponential
   - D. Fractions
4. What is the name for the sharp teeth in mammals, such as dogs, that are good for tearing meat?
   - A. Incisors
   - B. Canines
   - C. Cutters
   - D. Bovines
Answers
Question 1: D
The first three choices sound anything but “free.”
Also, the most common location for the correct answer is choice C or D.
Question 2: C
Note all the qualifiers in choice C; it's really a definition, a dead give-away. Remember to keep answer choices parallel and about the same length.
Question 3: C
Is this question meant to measure my knowledge of Latin, or the definition of "exponential"? As you can see in this example, it is difficult to do both in one question. In this question it really doesn't matter if I know the Latin root or not; it's not a hint to the correct answer. If you really want to assess that knowledge, then use a separate question just for that. Each question should test a single parcel of knowledge unless it is intentionally testing more complex concepts or applications.
Note the principal distractor, "Geometric," the most likely term people confuse with "exponential."
Question 4: B
Learners will thank you for the dog/canine hint, but why give away the answer?
Minimize students' ability to cheat
Some students may feel compelled or otherwise tempted to cheat on tests. This can occur at two levels.
Level 1: Who is the test taker? Make sure the test-taker is who they say they are. Recently in the United States a test-for-hire scheme was uncovered on college placement exams. It turns out there was an easily exploited flaw in the personal identification process.
Level 2: Thwarting individual cheating. In my experience, LMSs don't provide much help in the question design process. However, an LMS can greatly increase test security by randomizing the sequence of questions or the choices within questions.
Let me add a word here about authoring system and/or LMS/CMS (learning or content management system) capabilities. What I will describe in the next few paragraphs are features that are very useful from test security and management perspectives. But each LMS is different, so you will have to know the capabilities of your particular system, including whether you can customize standard reports.
Now let's get back to test security. If your LMS can randomize the sequence of questions or, better yet, draw a stipulated number of questions from a larger question bank, then each student gets a slightly different yet equitable test. It is desirable to cover each objective no matter which questions the LMS draws. Many LMSs organize questions by objectives or groups and then draw a sample from each group.
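To see the idea in miniature, here is a minimal Python sketch of drawing a stipulated number of questions from each objective group. The bank structure, group names, and per_objective parameter are hypothetical stand-ins, since every LMS stores and samples questions differently.

```python
import random

# Hypothetical question bank grouped by objective; the structure and names
# are illustrative, not taken from any particular LMS.
question_bank = {
    "objective_1": ["Q1", "Q2", "Q3", "Q4"],
    "objective_2": ["Q5", "Q6", "Q7"],
    "objective_3": ["Q8", "Q9", "Q10", "Q11"],
}

def draw_test(bank, per_objective=2):
    """Draw a fixed number of questions from each objective group, then
    shuffle the combined list so every objective is covered on every test."""
    drawn = []
    for questions in bank.values():
        drawn.extend(random.sample(questions, per_objective))
    random.shuffle(drawn)
    return drawn

print(draw_test(question_bank))
```

Because the sample is taken group by group, every student's test covers every objective even though no two tests are identical.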
Randomization has a couple of implications. If you like to give students confidence-building questions early in a test to help them overcome test anxiety, you may or may not retain this capability with randomized selection. With randomization, it is also possible that one question could give away the answer to another if it comes first.
You may also be able to randomize the sequence in which the LMS lists answer choices within each question. Here again the reason is to preclude the usefulness of "cheat sheets." Think through what randomization means. Do you like to use the phrase "All of the above"? Buzzer sound here: that won't work with random sequencing. I don't like that phrase anyway: it's often a give-away.
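Here is a small, hypothetical illustration of shuffling a question's choices while keeping track of the correct one; the comment notes why a position-dependent choice like "All of the above" breaks under shuffling.

```python
import random

def shuffle_choices(choices, correct):
    """Return the choices in random order plus the new index of the correct one."""
    order = random.sample(choices, len(choices))  # shuffled copy
    return order, order.index(correct)

choices = [
    "Governments intervene",
    "Governments set prices",
    "Governments make production decisions",
    "Supply and prices balance based on scarcity and desires",
]
shuffled, answer_index = shuffle_choices(choices, choices[3])
# Caution: a choice like "All of the above" is position-dependent; once
# shuffled it can land anywhere and no longer refer to "the above."
print(shuffled, answer_index)
```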
Even though the subject of this article is multiple-choice questions, I want to mention one thing about matching questions. In many ways they follow rules similar to multiple-choice question design. I've noticed most authoring systems and LMSs treat matching questions differently than I would prefer. Instructionally, I like to have an unequal number of choices in the groups that are being matched. This way the learner cannot get the last one or two pairs correct merely by process of elimination. With an uneven number there will always be one left over. But some LMSs only offer equal-size lists. Sigh.
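If your tools let you build the exercise yourself, a sketch like the following (with made-up term/definition pairs) shows the unequal-lists idea: the definitions column carries an extra distractor, so elimination alone can't finish the exercise.

```python
import random

# Hypothetical term/definition pairs for a matching exercise; the extra,
# unmatched definition keeps process-of-elimination from solving the end.
pairs = {
    "incisor": "front tooth used for cutting",
    "canine": "pointed tooth used for tearing",
    "molar": "flat tooth used for grinding",
}
extras = ["organ that pumps blood"]  # distractor with no matching term

left_column = list(pairs)                      # terms
right_column = list(pairs.values()) + extras   # definitions, one extra
random.shuffle(left_column)
random.shuffle(right_column)

# One definition is always left over, so matching the final term still
# requires knowing the content rather than counting what remains.
```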
Provide ample feedback and remediation
Unless a test is purely for assessment and the only feedback will be the student's score, I always like to include instructive feedback for each question. The amount depends on the purpose of the test and the difficulty of the topic. Is this job training where we want employees to truly master each objective? Then we'd better catch misconceptions and remediate at that famous "teachable moment."
Sometimes it is sufficient to give feedback at the question level. But I often like to give specific feedback for each answer choice, even more so when I use multiple-choice questions for embedded self-checks. I want to clear up any misunderstandings before the learner proceeds.
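One simple way to organize answer-level feedback, sketched here with an invented data structure rather than any particular authoring system's format, is to attach remediation text to each choice:

```python
# A sketch of answer-level feedback: each choice carries its own remediation
# text instead of a single right/wrong message for the whole question.
question = {
    "stem": "Which one is true of a free market?",
    "choices": {
        "A": ("Governments intervene",
              "No; intervention is the opposite of a free market."),
        "B": ("Governments set prices",
              "No; in a free market, supply and demand set prices."),
        "C": ("Governments make production decisions",
              "No; producers respond to market signals instead."),
        "D": ("Supply and prices balance based on scarcity and desires",
              "Correct; that balancing act defines a free market."),
    },
    "correct": "D",
}

def feedback_for(q, selected):
    """Return the remediation text attached to the learner's selection."""
    return q["choices"][selected][1]

print(feedback_for(question, "B"))
```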
Use variables, branching and logic
Sometimes we may want to elaborate more than a feedback block will permit, or even loop the learner back through the module or take them to an alternative description with more examples. This means we need branching, and in its more exotic forms it may require variables and logic. We have said it before: it all depends on your authoring system and LMS/CMS capabilities.
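As a rough illustration of what such logic might look like (the names and threshold are invented for the example), a counter per objective can decide when to branch to remediation:

```python
# A branching sketch: a variable counts misses per objective and routes
# the learner to remediation once a threshold is reached.
misses_by_objective = {"free_markets": 0}

def record_answer(objective, is_correct, threshold=2):
    """Update the miss counter and decide where the course goes next."""
    if not is_correct:
        misses_by_objective[objective] += 1
    if misses_by_objective[objective] >= threshold:
        return "branch_to_remediation"  # e.g., replay the module with more examples
    return "continue"

print(record_answer("free_markets", is_correct=False))
print(record_answer("free_markets", is_correct=False))  # second miss triggers the branch
```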
Item analysis
You have put all this care into the design of your questions. How did those questions perform? Were the range and average scores about what you expected? Did your carefully constructed distractors help you discriminate those who know the material well from those who don't? If a question was missed several times, was it missed by those with lower scores as you would expect, or did some of the high scorers also miss it? Which choices did learners select?
The answers to these questions complete the circle. You have designed the content and delivery of the training, and you carefully constructed the test (or quizzes or embedded self-checks). The item analysis report can now tell you not only whether learners "got it" (scores alone can tell you that part) but also give you insight into what they were thinking.
It's wonderful if your LMS can produce item analysis reports, even more so if you can tailor the output to serve your needs precisely. However, in the absence of such automated reports you can perform item analysis with just pen and paper, as long as you can determine which answers each student selected. You may have to drill down into detailed records one by one (I hope the data will be more accessible than that), but assuming you can find the raw data, all you have to do is tally the number of times each answer choice was selected for each question.
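If you do end up tallying with a short script instead of pen and paper, the whole job amounts to counting selections. Here is a minimal Python sketch over a made-up response log; the record format stands in for whatever raw data your LMS lets you export.

```python
from collections import Counter, defaultdict

# Each record is (student, question, choice selected); hypothetical data.
responses = [
    ("s1", 1, "D"), ("s2", 1, "C"), ("s3", 1, "D"),
    ("s1", 2, "C"), ("s2", 2, "C"), ("s3", 2, "B"),
]

# Tally how many times each choice was selected for each question.
tallies = defaultdict(Counter)
for student, question, choice in responses:
    tallies[question][choice] += 1

for question in sorted(tallies):
    print(f"Question {question}: {dict(tallies[question])}")
```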
Take a look at the sample report in the table below. What can you infer about these eight questions?
[Table: sample item-analysis report for eight questions and twenty students, showing % correct for the whole group, the top third, and the bottom third, plus response counts for each answer choice; correct choices shown in bold.]
Here are some of the things I see in this report;you may see other details:
- Looking at column 3 ("% Correct Whole Group"), we see that three questions scored rather low; #5 was only answered correctly by 40% of this group of students (8 out of 20).
- Looking at the responses for #5, we see that everyone who missed it selected the same answer choice, C. (The correct choice for each question appears in bold.) Furthermore, it was missed somewhat equally by students in all three scoring ranges (overall, top third, and bottom third). It appears that either the question is misleading in a consistent way, or the instruction was lacking. This report helps us see where we may need further analysis.
- Remember what we said earlier about questions with five choices? We see it graphically here: #1 and #7 are the only questions with five choices, and the fifth choice is the correct answer in both cases. Recommendation: try harder to have four distinct answers.
- What about question #4? Those who missed it chose the full range of wrong answers. This is what you would expect, especially if it's a difficult question to start with. We're each unique and may interpret complex concepts in a variety of ways.
- Question #1 did a good job of finding those learners who knew the material except perhaps for one important nuance. (I assume choice D is the distractor that is close in meaning to correct answer E.)
- Look at the % Correct columns. First, your LMS may or may not compile this type of data. (As a side note, this may be an acid test of whether trainers or database specialists designed the LMS, along with the feature of having unequal lists for matching questions.) You would expect only occasional misses in the top-third column and more frequent misses in the column for those scoring in the bottom third. Question #5 again stands out because so many in the top third missed it. A small sketch of this top-third/bottom-third comparison follows this list.
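Here is the promised sketch of the top-third/bottom-third comparison, using invented data structures: it ranks students by total score and compares each band's percent correct on one question.

```python
# Hypothetical inputs: totals maps student -> overall score, and
# correct_by_student maps student -> {question: answered correctly?}.
def band_percent_correct(totals, correct_by_student, question):
    """Compare one question's % correct for the top and bottom thirds,
    ranked by total score."""
    ranked = sorted(totals, key=totals.get, reverse=True)
    third = max(1, len(ranked) // 3)
    top, bottom = ranked[:third], ranked[-third:]

    def pct(group):
        hits = sum(correct_by_student[s][question] for s in group)
        return 100.0 * hits / len(group)

    return pct(top), pct(bottom)

totals = {"s1": 9, "s2": 6, "s3": 3}
correct = {"s1": {5: False}, "s2": {5: False}, "s3": {5: True}}
# A question missed heavily by the top third (like #5 in the report)
# is a red flag worth investigating.
print(band_percent_correct(totals, correct, 5))
```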
Conclusion
I have presented a brief review of design principles for multiple-choice questions along with ways to use your authoring system and/or LMS/CMS to help manage test security and analyze how the questions themselves performed. You can find much more on these topics with quick Web searches. Use the references in this article as a starting point if you wish. And be sure you know the full capabilities of your LMS.
References
Classroom assessment. Online course developed by Pinellas School District and the Florida Center for Instructional Technology at USF. Downloaded Nov. 2, 2011 from https://fcit.usf.edu/assessment/selected/responseb.html
Hoepfl, Marie C. (1994). Developing and evaluating multiple-choice tests. The Technology Teacher, April 1994, pp. 25-26.
McCowan, Richard C. (1999). Developing multiple-choice tests: tips and techniques. Research Foundation of State University of New York (SUNY)/Center for Development of Human Services. Downloaded Nov. 2, 2011 from https://www.eric.ed.gov/PDFS/ED501714.pdf
Malamed, Connie. Ten rules for writing multiple-choice questions. Downloaded Sept. 30, 2011 from https://theelearningcoach.com/elearning_design/rules-for-multiple-choice-questions/
Runté, Robert (2001). How to write tests. Faculty of Education, University of Lethbridge, Canada. Downloaded from https://www.uleth.ca/edu/runte/tests/
Runté, Robert (2001). Item analysis without complicated statistics. Faculty of Education, University of Lethbridge, Canada. Downloaded from https://www.uleth.ca/edu/runte/tests/iteman/interp/interp.html
Writing and taking multiple-choice questions. Downloaded Nov. 2, 2011 from https://homepage.mac.com/astronomyteacher/dvhs/pdfs/multiplechoice.pdf



