As more learning and training are delivered online, instructors and trainers no longer have the opportunity to get to know all their learners by meeting them face-to-face. This disconnect often leads to missing or ineffective assessments, as eLearning developers increasingly hope their learners understand the content but lack the tools to measure that understanding adequately. How do you know if your eLearning assessments are effective? Are you testing learners' content knowledge, or do your assessments measure their deduction skills?

In this session we'll explore common test-writing pitfalls and discover how smart test-takers excel without learning the content. We will review high-quality, content-driven assessments and experience poorly written test items from the test-taker's point of view. You'll learn multiple-choice question-writing "rules," such as including plausible distractors, and how to recognize common test-item pitfalls, like convergence. You'll find out how smart test-takers use grammatical cues and distractor length to make educated guesses. Combine all of these best practices, and take your eLearning assessments to the next level!

Session Video