Research Spotlight: Writing Assessments to Validate the Impact of Learning

Not many people really like writing assessments of any kind, whether the end product is a test, quiz, or complex certification examination. In fact, I have always thought of learning assessment as one of the toughest challenges facing learning and development (L&D) practitioners.

Today’s L&D practitioners are typically not statisticians, assessment specialists, standardized test writers, or learning psychologists. Their responsibilities are more likely to be broader and deeper than that: they must prepare their learners for the realities of working in a technology-mediated, information-rich, and increasingly collaborative workplace.

The need for practical assessment tools and resources

Here’s the point: We need to provide assessment writing tools and resources for members of the learning and development profession who are not assessment specialists but still need specialized guidance in this increasingly important skill area. Providing those practical tools, templates, professional perspectives, and resources for creating today’s learning assessments is the goal of our latest eBook, Writing Assessments to Validate the Impact of Learning.

Edited by experienced learning practitioner and assessment expert A.D. Detrick, our eBook begins with Jane Bozarth’s insightful introduction and then presents current perspectives from several industry thought leaders, including Mike Dickinson and Marc Rosenberg. We also provide usable guidelines, downloadable templates, assessment websites, and other practical resources for all aspects of hands-on learning assessment. These include in-depth references, an annotated bibliography of the Guild’s assessment resources for further reading, and a glossary of terms for those new to the specialty field of learning assessment.

Selecting the best type of assessment item

Table 1 shows an example of these detailed guidelines. Of the six question types described below, “multiple choice” is often the preferred item type for most cognitive tests because these items can assess most of Bloom’s cognitive categories and can be quickly and reliably scored by an individual or by a machine. Bloom’s Taxonomy and the newer “Digital Taxonomy” are critically important assessment tools. The eBook provides detailed information about the cognitive categories within the taxonomy, as well as additional templates and web resources.

Table 1: Advantages and disadvantages for different types of questions

True/False questions

  Advantages:
  • Can be used to quickly assess multiple objectives
  • Very easy to write questions
  • Easy to grade/score

  Disadvantages:
  • Can only be used on the lowest cognitive categories (Remember, Understand)
  • Easy to guess correctly
  • Almost impossible to determine reliability

Matching questions

  Advantages:
  • Can also be used to quickly assess multiple items in a minimum of space
  • Good for an environment that requires a large amount of recall of facts

  Disadvantages:
  • Only useful on the lowest cognitive categories (Remember, Understand)
  • Can be confusing and time-consuming for learners

Multiple choice questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Can assess a learner’s ability to integrate information
  • Can diagnose a learner’s difficulty with certain concepts
  • Can provide learners with immediate feedback about why distractors were wrong and why correct answers were right
  • Can cover a wide range of difficulty levels
  • Usually requires less time for learners to answer
  • Usually easily scored and graded

  Disadvantages:
  • Does not allow learners to demonstrate knowledge beyond the options provided
  • Requires a great deal of time to construct effective questions, especially ones that test higher levels of learning
  • Encourages guessing because one option is always right
  • Test takers may misinterpret questions

Fill-in-the-blank questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Requires learners to know the answer, not merely recognize it

  Disadvantages:
  • Very difficult to write questions for the higher cognitive categories
  • Automated grading can be difficult

Short answer questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Requires learners to know the answer, not merely recognize it
  • Easy to write questions, as there are no distractors to create

  Disadvantages:
  • Need to make sure there is only one correct answer
  • Scoring can be difficult and/or time-consuming
  • May encourage memorization instead of learning

Essay questions

  Advantages:
  • Can be used on the highest cognitive categories (Analyze, Evaluate, Create)
  • Allows for greater context in answers
  • Can provide a more realistic and generalizable task for test takers
  • Usually takes less time to construct
  • Extremely difficult for test takers to guess the correct answer

  Disadvantages:
  • Requires much more time to answer
  • Answers are only as good as the learner’s writing skills
  • Grading is more subjective; non-test-related information may influence the scoring process
  • Requires extensive effort to grade in an objective manner
  • Requires more time to grade


Source: The eLearning Guild Research, 2016.
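
To make the scoring trade-offs in Table 1 concrete, here is a minimal, hypothetical sketch of why objectively keyed item types (true/false, matching, multiple choice) are quick and reliable to score by machine, while fill-in-the-blank responses need normalization and often human review. The class and function names are illustrative assumptions, not code from the eBook.

```python
# Minimal sketch (illustrative only): scoring objectively keyed items.
from dataclasses import dataclass

@dataclass
class Item:
    stem: str   # the question text
    key: str    # the keyed correct answer (option letter or expected text)

def score_selected_response(item: Item, response: str) -> bool:
    """True/false, matching, and multiple-choice items have one unambiguous
    key, so machine scoring reduces to a simple, reliable comparison."""
    return response.strip().upper() == item.key.strip().upper()

def score_fill_in(item: Item, response: str) -> bool:
    """Fill-in-the-blank is harder to automate: learners may vary casing,
    spacing, or wording. Normalization helps, but synonyms and misspellings
    still push borderline responses to human review."""
    normalize = lambda s: " ".join(s.lower().split())
    return normalize(response) == normalize(item.key)

mc = Item("Which option best describes Bloom's 'Apply' category?", "B")
print(score_selected_response(mc, "b"))   # True: trivially machine-scorable

blank = Item("A reusable test question is stored in an item ____.", "bank")
print(score_fill_in(blank, " Bank "))     # True after normalization
print(score_fill_in(blank, "pool"))       # False, though a human might accept it
```

The last line illustrates the table’s caveat: a reasonable synonym fails exact matching even though a human grader might accept it, which is why automated grading of constructed responses "can be difficult."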

Best practices for writing assessment items

Writing individual assessment items, regardless of the type you choose, can often be the most daunting part of the process. Table 2 is an abbreviated list of best practices included in the eBook. These practices apply to all question types and help ensure that your questions are as effective as possible.

Table 2: Best practices for writing effective questions


Write effective questions

  • Focus only on one thought, problem, or idea in each question.
  • Keep questions independent. Do not refer to any part of any other question.
  • Make sure the question addresses a very specific problem.
  • Make sure the question does not make any assumptions or rely on any context outside of the question; the question should stand alone and provide all information needed to answer it. Place critical and descriptive material early in the question.
  • If the question includes a graphic, refer to the graphic in the question (e.g., “Identify X in the graphic below”).
  • Write questions in positive form. Avoid using words like NOT and EXCEPT in the question stem (e.g., “Which of these situations should NOT include an audit?” should be written as “When should a situation include an audit?”).
  • Include the selection criteria in the question if the question calls for a judgment (e.g., “If ____, then which is the best…?”).
  • Ensure the question tests the learning objective as defined in the curriculum and that it measures the appropriate cognitive level. This will help ensure that the question is neither too easy nor too difficult.
  • Do not instruct or inform in the question.
  • Do not use absolute terms, such as always, all, never, except, or none, in questions.
  • Do not give clues to the correct answer in the question or answer choices.
  • Do not include questions that give away the answer to another question.
  • Do not trick the participant. Test specific knowledge, not test-taking skills.
  • Do not write multi-variable questions: “How and when would professional skepticism apply?”

Review questions for clear and concise writing

  • Use a style guide for consistency.
  • Express complete thoughts.
  • Use active voice in the present tense.
  • Remove all irrelevant or redundant material.
  • Use economy of language (e.g., use “to” rather than “in order to”).
  • Avoid words with multiple meanings.
  • Avoid “window dressing” or superfluous information that isn’t necessary to ask the question and get a response.
  • Always strive for clarity and readability. Make sure you are testing the specific piece of knowledge, not the learner’s reading ability or comprehension.

Review questions for correct grammar and punctuation

  • Capitalize the same words the same way every time.
  • Write out terms followed by the acronym in parentheses the first time they appear in a question (e.g., Chief Learning Officer [CLO]), unless the acronym is being tested.
  • Try to write questions as complete sentences instead of stems that become complete sentences only when read with an answer option (e.g., “A correctly placed tick mark will ____.” should be written as “Where would a correctly placed tick mark appear?”).


Source: The eLearning Guild Research, 2016.
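
As a thought experiment, several of the Table 2 guidelines are mechanical enough to check automatically. The sketch below is a hypothetical question-stem “linter” (not a tool from the eBook) that flags absolute terms, negative stems, wordy phrasing, and multi-variable questions; the rule set and function name are assumptions for illustration.

```python
# Hypothetical sketch: flagging violations of a few Table 2 guidelines.
import re

ABSOLUTE_TERMS = {"always", "all", "never", "except", "none"}

def lint_question_stem(stem: str) -> list[str]:
    """Return warnings for guideline violations detectable by simple rules."""
    warnings = []
    words = {w.lower() for w in re.findall(r"[A-Za-z]+", stem)}
    if words & ABSOLUTE_TERMS:
        warnings.append("Avoid absolute terms (always, all, never, except, none).")
    if "not" in words:
        warnings.append("Write in positive form; avoid NOT/EXCEPT in the stem.")
    if "in order to" in stem.lower():
        warnings.append("Use economy of language: prefer 'to' over 'in order to'.")
    if re.search(r"\b(how and when|when and how)\b", stem, re.IGNORECASE):
        warnings.append("Avoid multi-variable questions; test one idea at a time.")
    return warnings

for warning in lint_question_stem(
    "Which of these situations should NOT always include an audit?"
):
    print(warning)
# Flags both the negative stem and the absolute term "always".
```

Checks like these only catch surface issues; judgment-based guidelines, such as whether a question tests the learning objective at the right cognitive level, still require human review.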

Looking to the future of learning assessment

In addition to these practical resources, we also thought it important to look at the “future of assessment” practices and issues. Here is a brief sampling of what our contributors (including myself) said. As you will see, we often paint a less than enthusiastic picture of “where assessment is going” in the future.

“I’ve long warned of my concerns about so much ‘evaluation by autopsy’: our tendency to assess at the end of training or another learning intervention. We offer a smile sheet at the end of the day. We offer a multiple-choice quiz at the end of a module. The learning management system spits out reports about number of completions and average time to complete and average quiz scores … [But] none of that really tells us much about how well a learner can apply new learning, nor do we find out much about how to fix a course that isn’t working well. The future is bringing new tools to help us assess—and respond—as needs emerge or conditions evolve. There will be a quantum shift in the needs assessment phase—the beginning, not the end—of our work.” (Jane Bozarth)

“Probably the biggest irony of discussing the future of learning assessment is that the future looks so much like the past. Rather, the future looks very much like the past we should’ve been implementing all along. Indeed, technology has provided us with new and exciting ways to deliver assessments and capture assessment data, but the future remains relatively unchanged. We need to create valid, reliable assessments that provide some certainty that the knowledge was a product of learning—and not guessing or prior knowledge. Then we need to use these measurements for the performance management of our learners as well as our own internal learning and development efforts.” (A.D. Detrick)

“My hope for the future is that instructors and trainers will view assessments as an integral part of the learning process, not an isolated side step, especially not one that focuses on lower levels of learning, such as terminology and taxonomies. I think that, too often, writers of paper-and-pencil tests (whether given on actual paper or on a computer screen) are lured by those very media into writing simple multiple-choice and true/false assessments, when more authentic assessment is within reach… So for the future of assessments, my hope is that we will continually challenge ourselves to go beyond the lower levels of learning when we write assessments, and that we will not be restrained by the media at hand (e.g., multiple-choice questions) from finding imaginative, more authentic ways to measure students’ learning.” (Mike Dickinson)

“If we want learning assessment to have a future, here are three suggestions. First, take some money away from your instructional design budget and build your organization’s evaluation expertise. You may produce fewer courses, but what you do build might have a shot at actually demonstrating real performance improvement. Second, move compliance training away from measures of attendance and completion to better measures of actual performance (a hard slog, I know). And finally, put your clients and customers in charge of evaluation by letting them tell you what constitutes success and then, together, you measure it … or not, and see what happens.” (Marc Rosenberg)

“We all hear about the need for assessments to produce better and more precise data. The most important use of these assessment data (as we’re told) is to isolate the precise impact of training interventions on business outcomes. This is an increasingly tall order for most learning organizations… [Instead] we need to focus our future assessment efforts on gleaning actionable and practical assessment data rather than producing an ever-growing morass of precise data points that try to connect a single training event to a change in business metrics, such as sales or costs. At the end of the day, by focusing on actionable assessment data, we’ll save ourselves from an exhausting waste of effort and resources because—in most cases—precise assessment data aren’t required to produce practical, feasible, repeatable, and actionable results.” (Sharon Vipond)

Finding the pathway to better and more effective learning assessment

To conclude, I believe that our contributing editor, A.D. Detrick, has said it best: “For far too long, we have relegated assessments to an afterthought in the design of courses. Every instructional model that includes assessments will start with the assessment and build the course design from that, but I have seen very few courses designed that way. Instead, assessments are assembled at the last minute, and their design is primarily informed by an insufficient allotment of time. The job of writing the questions is often left to subject matter experts who have knowledge of the content, but (usually) no experience writing assessment questions. Worst of all, the analysis of the results is often completely absent. This is a common dysfunctional cycle, in which it feels useless to measure something that was poorly designed, and hard to properly design something that won’t be effectively measured… The purpose of this eBook is to compile enough information to provide a pathway that would allow anyone to write [effective] assessment questions, regardless of experience or role.”

To emphasize what A.D. Detrick and all of our contributors are saying: if followed properly, these guidelines and resources can help you find a better “pathway” to learning assessment. Use these immediate, practical steps to avoid the pitfalls that often compromise today’s learning assessments and to break the dysfunctional cycle of bad measurement and bad assessment design.

