According to an ATD study on evaluating learning, organizations reported that only 58 percent of technology-based learning programs are measured at Level 1, 51 percent at Level 2, and just 18 percent at Level 3. In other words, more than 80 percent of virtual learning programs lack post-event success measures. Most organizations (at least 86 percent) are investing in virtual learning, but only a few are finding out whether these programs add value to the business. The reasons vary: lack of resources, lack of time, not knowing how to capture results, and so on. In my experience, it's often because evaluation decisions are either left to chance or deferred until after the program has rolled out. Most organizations say they want effective virtual training, but without an evaluation strategy, success is largely a matter of luck.

In this interactive session, you will learn nine techniques for evaluating the results of your virtual training programs. Each technique includes an overview of the approach, a short real-world example of an organization that uses that strategy, and practical tips on applying it to your own virtual programs. You'll get an overview of five evaluation strategies (D. Kirkpatrick's Four Levels, J. Kirkpatrick's ROE, Brinkerhoff's Success Case Method, Thalheimer's LTEM, and Phillips' ROI) that could inform your approach. Then you will learn nine specific, practical ways to measure the success of virtual training: what to do before a virtual class starts, what to do during the class, and three follow-up actions to take afterward. You'll leave with an evaluation strategy planning tool, along with at least nine practical actions to try in your next virtual program.


Session Video