Impact measurement too often comes as an afterthought, or worse, not at all. Tools and techniques for measuring the impact of learning are inconsistent and unevenly applied. As a result, organizations often miss out on insights that might advance their thinking, increase efficiency, and drive better results. How can an organization or function design and implement an impact assessment system that brings more rigor, regularity, and reason to evaluating program success and generates meaningful insights? What considerations drive the approach? How do you determine what is "good enough"?

In this session, you'll start by taking a step back to explore the purpose—or purposes—of measuring the impact of your learning program. You'll learn how an impact assessment system can be used throughout the full lifecycle of a learning program. You'll then explore (and add to) a framework and taxonomy of assessment methods. Using that taxonomy, you'll learn how to apply a structured process to define an appropriate impact assessment system for a given learning program. We'll close by discussing methods for standardization across programs and by defining a "good enough" approach. You'll leave with practical rules of thumb to keep at hand when designing your impact assessment system.

In this session, you will learn:

  • Common uses of impact assessment data and implications for design
  • How to apply impact assessment throughout a learning program lifecycle
  • How to categorize impact assessment methods, along with a taxonomy of common methods
  • How to use a structured process to design an impact assessment system
  • What to consider when standardizing your approach across a portfolio or function
  • Top tips for designing impact assessment systems