I admit it. One of the reasons I love working in e-Learning is that I love playing with all the technologies, learning how they work, and seeing how people interact with them. When new technologies arise and gain popularity, puzzling out how we can use each one to enhance learning is a great excuse to dive in. And, as we all know, analyzing a tool is a necessary prelude to incorporating the tool into a curriculum. The primary focus of e-Learning, after all, is learning.

Keeping that focus is important. In a comparative review released last year by the U.S. Department of Education, analysts examined 51 experiments, conducted between 1996 and 2008, that contrasted face-to-face and online learning conditions. They found that students in online learning conditions performed better than those who received face-to-face instruction. Students in blended learning conditions (using both online and face-to-face learning strategies) performed best overall.

If the findings ended there, the study would be a clear endorsement for using online or blended teaching methods for most training problems, but the comparison also indicated that students in online and blended learning conditions spent more time on-task. Because more time on-task usually translates into better performance, the authors couldn’t conclude that online or blended learning strategies yield better learning in and of themselves. The implication for learning professionals is clear: if a trainer wants to use technology to make a difference in learners’ training outcomes, the way to do it is through thoughtful application of the technologies involved.

A three-step look at new instructional technologies

My own approach to examining new instructional technologies has three steps. First, I examine the tool itself. If, like most Learning 2.0 tools, it can be used in non-instructional settings, I like to think about how people use it in those other settings. How does it allow users to access or interact with information? How do people use it to interact with one another?

My second step is to take a fresh look at instructional theory and think about how I could use the tool to accomplish the tasks the theory recommends. The instructional theory provides a framework for exploring the tool’s uses and limitations. I could make up a framework of my own, but since I know my usual bias is in favor of the tool, using an already-established framework keeps me honest and focused on its instructional uses. A side benefit of using someone else’s framework is that it can suggest uses for a tool that might not have occurred to me otherwise.

Gagne’s theory of instruction is the one that resonates most with me and most often fits my instructional goals. If I wanted to evaluate whether to incorporate a tool like Twitter into my instructional strategy, I’d check whether it could support any of the nine events of instruction. If you aren’t familiar with Gagne’s Events of Instruction, Table 1 may help make this clearer.


Table 1. Evaluating Twitter against Gagne's Events of Instruction

| Internal Process | Instructional Event | Could it be done effectively using Twitter? |
| --- | --- | --- |
| Reception | 1. Gaining attention: Use abrupt stimulus change. | |
| Expectancy | 2. Informing learners of the objective: Tell learners what they will be able to do after learning. | |
| Retrieval to working memory | 3. Stimulating recall of prior learning: Ask for recall of previously learned knowledge or skills. | |
| Selective perception | 4. Presenting the content: Display the content with distinctive features. | |
| Semantic encoding | 5. Providing “learning guidance”: Suggest a meaningful organization. | |
| Responding | 6. Eliciting performance: Ask learner to perform. | |
| Reinforcement | 7. Providing feedback: Give informative feedback. | |
| Retrieval and reinforcement | 8. Assessing performance: Require additional learner performance with feedback. | |
| Retrieval and generalization | 9. Enhancing retention: Provide varied practice and spaced reviews. | |


As I check through the list, I can’t quite imagine a scenario where I could use Twitter to gain the attention of my learners, and I’m certain I don’t see it as an effective way to inform learners about training objectives. On the other hand, I can imagine using Twitter to poll students in an instructor-led class on whether they remember information from a previous session or assignment. I can also imagine using it to give quick feedback to a student or to encourage students to offer feedback to one another. On balance, Twitter seems to have some potential as a supplementary instructional tool.
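If you run several candidate tools through the same checklist, even a few lines of code can turn those judgments into a comparable tally. Here is a minimal sketch in Python; the event list comes from Table 1, and the Twitter entries are my reading of the discussion above, labeled as assumptions rather than definitive ratings:

```python
# A quick tally of how many of Gagne's nine events a candidate tool
# might support. The Twitter entries below are illustrative assumptions
# drawn from the discussion in the text, not definitive ratings.

GAGNE_EVENTS = [
    "Gaining attention",
    "Informing learners of the objective",
    "Stimulating recall of prior learning",
    "Presenting the content",
    "Providing learning guidance",
    "Eliciting performance",
    "Providing feedback",
    "Assessing performance",
    "Enhancing retention",
]

def coverage(supported):
    """Count how many of the nine events appear in the tool's supported set."""
    return sum(1 for event in GAGNE_EVENTS if event in supported)

# Twitter, per the checklist above: recall polls and quick feedback.
twitter_supports = {
    "Stimulating recall of prior learning",
    "Providing feedback",
}

print(coverage(twitter_supports), "of", len(GAGNE_EVENTS), "events")
# prints: 2 of 9 events
```

Running the same `coverage` check on each tool you are considering gives you a rough, like-for-like number to weigh against each tool’s cost in the next step.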

In my last step, I consider the expenses associated with the tool. Whether you measure cost in money or in time, the more expensive a tool is, the more it needs to offer in instructional support. Authoring tools, for instance, can be expensive to buy and take a significant amount of time to learn and use, but a good tool can make up for its costs with enough versatility. A tool that gives me an effective way to address seven of the instructional events is worth more of my time and budget than a tool that can address only one. A free tool that doesn’t address any of the events is probably a distraction.

Just as stating objectives at the beginning of a course helps learners focus on the instructional goals of the course, instructional theories help focus learning professionals on their training goals. Using them helps ensure that when you apply technology, that technology makes the training better.


U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, D.C.

Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Needham Heights, MA: Allyn & Bacon.