
Research Spotlight: Writing Assessments to Validate the Impact of Learning

Not many people really like writing assessments of any kind, whether the end product is a test, quiz, or complex certification examination. In fact, I have always thought of learning assessment as one of the toughest challenges facing learning and development (L&D) practitioners.
Today’s L&D practitioners are typically not statisticians, assessment specialists, standardized test writers, or learning psychologists. Their responsibilities are more likely to be broader and deeper than that, because they have an overall responsibility to prepare their learners for the realities of working in a technology-mediated, information-rich, and increasingly collaborative workplace.
The need for practical assessment tools and resources
Here’s the point: We need to provide assessment writing tools and resources for members of the learning and development profession who are not assessment specialists, but still want and require specialized guidance in this increasingly important skill area. Providing those practical tools, templates, professional perspectives, and resources for creating today’s learning assessments is the goal of our latest eBook, Writing Assessments to Validate the Impact of Learning.
Edited by experienced learning practitioner and assessment expert A.D. Detrick, our eBook begins with Jane Bozarth’s insightful introduction and then goes on to present current perspectives from several industry thought leaders, including Mike Dickinson and Marc Rosenberg. We also provide usable guidelines, downloadable templates, assessment websites, and other practical resources for all aspects of hands-on learning assessment. These include in-depth references, an annotated bibliography of the Guild’s assessment resources for further reading, and a glossary of terms for those new to the specialty field of learning assessment.
Selecting the best type of assessment item
Table 1 shows an example of these detailed guidelines. Of the six question types described below, “multiple choice” is often the preferred item type for most cognitive tests because these items can assess most of Bloom’s cognitive categories and can be quickly and reliably scored by an individual or by a machine. Bloom’s Taxonomy and the newer “Digital Taxonomy” are critically important assessment tools. We provide detailed information in the eBook about the cognitive categories within the taxonomy as well as additional templates and web resources.
Table 1: Advantages and disadvantages of six question types: true/false, matching, multiple choice, fill-in-the-blank, short answer, and essay questions. (Source: The eLearning Guild Research, 2016.)
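To make the machine-scoring point concrete, here is a minimal Python sketch showing how a multiple-choice quiz can be scored automatically against an answer key. This is my own illustration, not material from the eBook, and the item IDs and answers are hypothetical:

# Minimal sketch of machine-scoring a multiple-choice quiz.
# The item IDs and answer key are hypothetical examples.

answer_key = {"Q1": "B", "Q2": "D", "Q3": "A"}

def score_responses(responses, key):
    """Return the percentage of items answered correctly."""
    correct = sum(1 for item, answer in key.items()
                  if responses.get(item) == answer)
    return 100.0 * correct / len(key)

learner_responses = {"Q1": "B", "Q2": "C", "Q3": "A"}
print(f"Score: {score_responses(learner_responses, answer_key):.1f}%")  # Score: 66.7%

Because scoring reduces to a lookup per item, the same routine works identically for one learner or ten thousand, which is exactly why multiple-choice items are so quick and reliable to score by machine.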
Best practices for writing assessment items
Writing individual assessment items, regardless of the type you choose, can often be the most daunting part of the process. Table 2 is an abbreviated list of best practices included in the eBook. These ensure that your questions are as effective as possible, and apply to all questions regardless of type.
Table 2: Best practices for writing assessment items, with question guidelines for writing effective questions and for reviewing questions. (Source: The eLearning Guild Research, 2016.)
Looking to the future of learning assessment
In addition to these practical resources, we also thought it important to look at the “future of assessment” practices and issues. Here is a brief sampling of what our contributors (including myself) said. As you will see, we often paint a less than enthusiastic picture of “where assessment is going” in the future.
“I’ve long warned of my concerns about so much ‘evaluation by autopsy’: our tendency to assess at the end of training or another learning intervention. We offer a smile sheet at the end of the day. We offer a multiple-choice quiz at the end of a module. The learning management system spits out reports about number of completions and average time to complete and average quiz scores … [But] none of that really tells us much about how well a learner can apply new learning, nor do we find out much about how to fix a course that isn’t working well. The future is bringing new tools to help us assess—and respond—as needs emerge or conditions evolve. There will be a quantum shift in the needs assessment phase—the beginning, not the end—of our work.” (Jane Bozarth)
“Probably the biggest irony of discussing the future of learning assessment is that the future looks so much like the past. Rather, the future looks very much like the past we should’ve been implementing all along. Indeed, technology has provided us with new and exciting ways to deliver assessments and capture assessment data, but the future remains relatively unchanged. We need to create valid, reliable assessments that provide some certainty that the knowledge was a product of learning—and not guessing or prior knowledge. Then we need to use these measurements for the performance management of our learners as well as our own internal learning and development efforts.” (A.D. Detrick)
“My hope for the future is that instructors and trainers will view assessments as an integral part of the learning process, not an isolated side step, especially not one that focuses on lower levels of learning, such as terminology and taxonomies. I think that, too often, writers of paper-and-pencil tests (whether given on actual paper or on a computer screen) are lured by those very media into writing simple multiple-choice and true/false assessments, when more authentic assessment is within reach… So for the future of assessments, my hope is that we will continually challenge ourselves to go beyond the lower levels of learning when we write assessments, and that we will not be restrained by the media at hand (e.g., multiple-choice questions) from finding imaginative, more authentic ways to measure students’ learning.” (Mike Dickinson)
“If we want learning assessment to have a future, here are three suggestions. First, take some money away from your instructional design budget and build your organization’s evaluation expertise. You may produce fewer courses, but what you do build might have a shot at actually demonstrating real performance improvement. Second, move compliance training away from measures of attendance and completion to better measures of actual performance (a hard slog, I know). And finally, put your clients and customers in charge of evaluation by letting them tell you what constitutes success and then, together, you measure it … or not, and see what happens.” (Marc Rosenberg)
“We all hear about the need for assessments to produce better and more precise data. The most important use of these assessment data (as we’re told) is to isolate the precise impact of training interventions on business outcomes. This is an increasingly tall order for most learning organizations… [Instead] we need to focus our future assessment efforts on gleaning actionable and practical assessment data rather than producing an ever-growing morass of precise data points that try to connect a single training event to a change in business metrics, such as sales or costs. At the end of the day, by focusing on actionable assessment data, we’ll save ourselves from an exhausting waste of effort and resources because—in most cases—precise assessment data aren’t required to produce practical, feasible, repeatable, and actionable results.” (Sharon Vipond)
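A.D. Detrick’s point about separating genuine learning from guessing and from prior knowledge can be made concrete with two standard psychometric calculations: the classical correction for guessing and the normalized pre/post gain. The Python sketch below is my own illustration of these well-known formulas, with hypothetical numbers; it is not drawn from the eBook:

# Two standard calculations for separating learning from guessing
# and from prior knowledge. All numbers are hypothetical.

def corrected_score(num_right, num_wrong, options_per_item):
    """Classical correction for guessing: R - W / (k - 1).
    Subtracts the score a learner would expect from blind guessing."""
    return num_right - num_wrong / (options_per_item - 1)

def normalized_gain(pre_pct, post_pct):
    """Normalized gain: the fraction of the possible improvement
    (beyond pretest prior knowledge) that was actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# 30 right and 10 wrong on 40 four-option items:
print(round(corrected_score(30, 10, 4), 2))   # 26.67
# Pretest at 40 percent, posttest at 85 percent:
print(round(normalized_gain(40.0, 85.0), 2))  # 0.75

A posttest score alone cannot distinguish between two learners who both finish at 85 percent; comparing their normalized gains against a pretest can.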
Finding the pathway to better and more effective learning assessment
To conclude, I believe that our contributing editor, A.D. Detrick, has said it best: “For far too long, we have relegated assessments to an afterthought in the design of courses. Every instructional model that includes assessments will start with the assessment and build the course design from that, but I have seen very few courses designed that way. Instead, assessments are assembled at the last minute, and their design is primarily informed by an insufficient allotment of time. The job of writing the questions is often left to subject matter experts who have knowledge of the content, but (usually) no experience writing assessment questions. Worst of all, the analysis of the results is often completely absent. This is a common dysfunctional cycle, in which it feels useless to measure something that was poorly designed, and hard to properly design something that won’t be effectively measured… The purpose of this eBook is to compile enough information to provide a pathway that would allow anyone to write [effective] assessment questions, regardless of experience or role.”
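Detrick’s observation that “the analysis of the results is often completely absent” is also one of the easier parts of the cycle to fix. As a hypothetical illustration (mine, not the eBook’s), here is a minimal item analysis in Python that computes the two most common statistics: item difficulty (the proportion answering correctly) and a simple upper/lower-group discrimination index:

# Minimal item analysis: difficulty (p) and an upper/lower-group
# discrimination index (D) for each item. The response matrix is
# hypothetical illustration data; each row is one learner,
# 1 = answered correctly, 0 = answered incorrectly.

responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]
ranked = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
half = len(responses) // 2
upper, lower = ranked[:half], ranked[-half:]

for item in range(len(responses[0])):
    p = sum(row[item] for row in responses) / len(responses)
    d = (sum(responses[i][item] for i in upper)
         - sum(responses[i][item] for i in lower)) / half
    print(f"Item {item + 1}: difficulty p = {p:.2f}, discrimination D = {d:+.2f}")

Items that nearly everyone gets right (or wrong), and items where the weakest learners outperform the strongest (a negative D), are the first candidates for review or rewriting.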
To emphasize what A.D. Detrick and all of our contributors are saying: If followed properly, these guidelines and resources can help you find a better “pathway” to learning assessment. Avoid the pitfalls that often compromise today’s learning assessments with these immediate, practical steps that help break the dysfunctional cycle of bad measurement and bad assessment design.


