Instructional design involves significant pre-planning, analysis, and revision. However, once the product is complete, designers rarely have the opportunity to evaluate how the design affected learners. Design evaluations are not currently standardized throughout the industry; institutions establish their own protocols for evaluation. The post-production evaluation stage may be the least used but most important stage of development. Recent research indicates that industry standards for instructional design are emerging. While learner experiences and industry standards should be considered to improve and strengthen design, relying solely on these tools may overlook a valid piece of evaluation: the learner outcomes. This piece of the equation is often quite informative for design, but it requires post-learning analysis. Deliberately incorporating this step is an easy addition to most instructional repertoires, and one that can deliver valuable insight into what is already working and where there are opportunities to improve learning.

Start with goals

Popular instructional design models include identifying goals or desired outcomes for the learning experience. Purpose should drive every development. Bloom's Taxonomy provides a framework for creating clear, learner-centered objectives for the learning experience. Starting with goals allows the course to be built in an intentional, logical sequence that scaffolds learning. With the goals in mind, the rest of the development follows.
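As a concrete illustration, here is a minimal sketch that pairs each level of the revised taxonomy with a few sample action verbs and assembles a learner-centered objective. The verb lists, the `draft_objective` helper, and the example content are illustrative choices, not a standard tool or API:

```python
# Revised Bloom's Taxonomy levels, ordered from lowest to highest cognition,
# with a few sample action verbs for writing learner-centered objectives.
# The verb lists are illustrative, not exhaustive.
BLOOM_LEVELS = {
    1: ("Remember",   ["define", "list", "recall"]),
    2: ("Understand", ["explain", "summarize", "classify"]),
    3: ("Apply",      ["use", "demonstrate", "solve"]),
    4: ("Analyze",    ["compare", "differentiate", "organize"]),
    5: ("Evaluate",   ["judge", "critique", "justify"]),
    6: ("Create",     ["design", "construct", "produce"]),
}

def draft_objective(level: int, verb: str, content: str) -> str:
    """Assemble a simple 'learners will be able to...' objective."""
    name, verbs = BLOOM_LEVELS[level]
    if verb not in verbs:
        raise ValueError(f"'{verb}' is not on the sample list for {name}")
    return f"Learners will be able to {verb} {content}. ({name})"

print(draft_objective(4, "compare", "two evaluation protocols"))
# Learners will be able to compare two evaluation protocols. (Analyze)
```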

Identify the gaps

Instructional design models incorporate evaluation as an iterative process occurring throughout a development. Although design models provide a framework of development stages that include evaluation, they do not define effective evaluation methods for the design itself. This is where Bloom's Taxonomy lends itself to the evaluation phase of development. The methodology aims to compare the learning outcome intended in the design with the learning artifact produced by the learner. Looking at the two side by side can help identify gaps in the design and opportunities to target specific course elements for redevelopment. IDs can engage in this gap analysis by creating a simple yet powerful Likert scale.
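One way to sketch such a gap analysis in code, assuming a six-point scale mapped to the revised taxonomy (1 = Remember through 6 = Create); the task names, ratings, and one-level flagging threshold below are all hypothetical:

```python
# Minimal gap analysis: compare the Bloom level intended in the design with
# the levels raters assigned to learner artifacts on a 1-6 Likert-style scale
# (1 = Remember ... 6 = Create). Task names and ratings are hypothetical.
from statistics import mean

intended = {"final_project": 6, "module_quiz": 2}

# One agreed-upon rating per learner artifact.
observed = {
    "final_project": [4, 5, 4, 4, 3],
    "module_quiz":   [2, 2, 3, 2, 2],
}

for task, intended_level in intended.items():
    avg = mean(observed[task])
    gap = intended_level - avg
    flag = "redesign candidate" if gap >= 1 else "on target"
    print(f"{task}: intended {intended_level}, observed {avg:.1f}, "
          f"gap {gap:+.1f} -> {flag}")
```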

[Editor's note: A learning artifact is tangible evidence of learner outcomes—what someone learned or experienced—coursework, tests and quizzes, projects, presentations given, performance of real or simulated tasks, media produced, etc.]

The first step is to identify the level of thinking required by the learning task using Bloom's Taxonomy. While this may seem easy, differences of opinion often emerge when levels of thinking are assessed. For this reason, calibration is a key step in the process: time invested in calibration at the outset saves time at the end. Calibration can be accomplished by reviewing Bloom's levels of thinking, examining tasks, assessing levels of cognition, and having follow-up conversations about why each person selected the levels they did.

Once your team has calibrated, you can identify the level of thinking required by the learning task(s) and begin your analysis. Designers should first analyze the level of cognition present in the artifacts the learners submitted for the task. This is ideally done as an individual assessment; the group then comes together to discuss the results. Using this method, every designer's opinion of the activity is heard, and any discrepancies can be talked through by the group. Once the group reaches consensus, the level of thinking displayed by the learners should be tallied, recorded, and compared to the level of thinking intended in the course design.
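The independent-rating, discuss-discrepancies, then tally-and-compare flow could look like the sketch below. The rater names, scores, and the spread threshold for flagging a discussion are hypothetical; teams would set their own:

```python
# Sketch of the calibration-and-consensus step: each designer rates the same
# artifacts independently; artifacts where ratings diverge are flagged for a
# follow-up conversation before the team tallies a consensus level.
# Rater names, scores, and the spread threshold are hypothetical.
from collections import Counter

ratings = {
    "artifact_01": {"dana": 4, "lee": 4, "sam": 4},
    "artifact_02": {"dana": 3, "lee": 5, "sam": 4},  # disagreement
    "artifact_03": {"dana": 2, "lee": 2, "sam": 3},
}

INTENDED_LEVEL = 4  # level of thinking the task was designed to elicit

to_discuss, tally = [], Counter()
for artifact, scores in ratings.items():
    values = list(scores.values())
    if max(values) - min(values) > 1:   # wide spread -> talk it through
        to_discuss.append(artifact)
    else:
        tally[round(sum(values) / len(values))] += 1  # provisional consensus

print("Needs discussion:", to_discuss)
print("Tally of consensus levels:", dict(tally))
print("At/above intended:",
      sum(n for lvl, n in tally.items() if lvl >= INTENDED_LEVEL))
```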

Close the feedback loop

Once the data is gathered, it should be analyzed. In this step, designers look at the comparative data. Are the learners displaying mastery of the content at the intended level of thinking? If they are, what elements of the design may have contributed to this success? Although every designer wants their course to produce the intended outcomes, there is often more to learn from instances where learners did not achieve the desired outcomes. If the learners are not displaying the intended level of thinking, what could be added or removed to facilitate success? How do these instances of design compare with successful ones?
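One way to frame that comparison is to compute, for each course element, the share of learner artifacts that reached the intended level, then sort elements into successes and shortfalls for side-by-side discussion. The elements, levels, scores, and 0.8 threshold below are hypothetical:

```python
# Per course element: what share of learner artifacts reached the intended
# level of thinking? Data and the 0.8 success threshold are hypothetical.
results = {
    # element: (intended_level, consensus levels observed across learners)
    "case_study": (5, [5, 5, 4, 5, 5]),
    "discussion": (4, [2, 3, 3, 2, 3]),
}

successes, shortfalls = [], []
for element, (intended, observed) in results.items():
    share = sum(lvl >= intended for lvl in observed) / len(observed)
    (successes if share >= 0.8 else shortfalls).append((element, share))

print("Working as designed:", successes)   # ask: which design choices helped?
print("Below intention:", shortfalls)      # ask: what to add, remove, rework?
```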

Impact

After data analysis is complete, it is time to make an impact. Information can be aggregated and shared with other IDs, research questions can be formulated, and changes can be made to courses or learning assets. Whatever action is taken, the biggest impact this practice will have on your team is instilling a process of continuous improvement. It will not only inform the thinking of everyone who participates, but also empower you with the knowledge and self-efficacy that come from evaluating your end product, closing your gaps, and capitalizing on your strengths.