The general way to look at tools and determine how they would meet your requirements has been described as asking, “Do you want it better, faster, or cheaper?” So far, the “better” part of the equation has had a very small place in the picture.

Because it is easier to measure tools in terms of how fast you can create a solution, at what price you can purchase the tool, or how much money you can save by using it, the “faster” and “cheaper” pieces have long been the standard business justifications that organizations use: we can save such-and-such an amount in travel costs by converting to e-Learning, and we can develop Web-based training faster by using this rapid development tool.

Most organizations today are developing competency models that have defined proficiency levels for specified jobs and career paths. You can look at these, in conjunction with the methods and tools for e-Learning, and start making better decisions that will help learning to advance business goals.

Competency models and proficiency scales should define the performance of the workforce and give us methods, or at least good clues, for measuring performance. That means training professionals should have a better opportunity to train toward better performance. In this article I address only online learning, but I’m sure the comparisons to other learning environments will be clear.

E-Learning categories

There is a wide variety of e-Learning tools and systems available on the market today – and this could be only the beginning. In one sense, all these tools make it easy to go from classroom to Web-based instruction. However, as with any building project, you will want to make sure that you are using the right tool for the right job. Usually, the ultimate goal for a learning effort is to generate some change in performance.

In a very broad categorization, we have six types of instructional products or settings:

  1. Passive Informational – This is generally a passive presentation of information. Learners click to advance to each slide or screen, and there is no interaction among learners or with a live instructor. There is usually pre-recorded audio.

  2. Basic Instructional – This is also often a screen-by-screen format, but it is generally more involved and comprises more than one module or lesson.

  These first two online learning options are arguably very similar in basic construct, and are usually what most people think of when they think of “e-Learning.” They involve no instructor, and are generally a screen-by-screen format. The main difference is in the interaction (or lack thereof). #1 is generally the online “briefing”; that is, simply a PowerPoint-style slideshow that one may or may not consider “formal” training. #2 is more complex in that it conforms to a training format that contains multiple lessons and human-computer interactions beyond clicking links and the Next button.

  3. Live Instructional (also known as the Virtual Classroom) – This is usually either a screen-by-screen presentation or an online demo, but under the control of a live instructor. In this format, learners may be able to post questions to the instructor or to other learners, and even interact virtually with features such as being able to “raise their hand” in class. Audio may be available one-way over the Internet, or two-way over a conference call. If the tool used goes beyond just a basic online meeting, students can also be put into small groups in “breakout rooms” to conduct an activity, and training sessions can be recorded for later access.

  4. Collaborative Learning (sometimes referred to as a Course Management System) – This learning environment is more comparable to the traditional classroom environment than any other, and universities frequently use such a format. Learners go to the online “school” to interact with the instructor and other learners, get and turn in assignments, work on group projects, and so on. Classes are often several days (or weeks) long. An instructor leads the class, and the interaction level has the potential to be very high with the use of a variety of synchronous as well as asynchronous tools such as discussion forums and wikis.

  Options 3 and 4 offer the ability to work with an instructor during online learning. They allow for more effective use of instructors by letting a human instructor teach learners across different locations. The main difference between the two is that #3 is synchronous interaction and #4 offers a blend of synchronous and asynchronous interaction.

  5. Immersed Simulation – Simulation places a learner in a performance situation as close to the real thing as possible for training purposes. Simulation can be 2-D or 3-D, or even text-based if appropriate to the learning content. Online simulation can be based on isolated tasks such as operating a piece of machinery, or on a more all-encompassing context such as investigating an accident (which can include inspecting the scene, interviewing witnesses and survivors, conducting analyses, etc.).

  6. Mobile Learning – “M-Learning” is the delivery of learning functions and tools on mobile or hand-held devices. This can include traditional presentations that fall into the passive informational category, or brief podcasts. The name may be a bit of a misnomer, in that, at the highest level, its application can be that of a performance tool rather than a traditional training event.

While there is some degree of progression in terms of what the learning in a given category can accomplish, no category is globally better than another. This simply provides one way to select the appropriate e-Learning category for the task at hand, and the main basis for that selection is the competency model best aligned to the task.

Competencies, proficiency, and learning

The proficiency scale on a competency model provides a broad yet attainable learning goal. Each training and performance situation should always be examined separately, although some general statements can be made about the abilities of the various e-Learning tools and environments and how they can attain the different levels of proficiency for the workforce. This may include delivering training in multiple categories to meet differing needs.

Proficiency scales usually have at least three levels starting with a basic (or even zero) proficiency going up to advanced or expert proficiency. These scales, when applied to a competency model, should provide performance behaviors that indicate someone has achieved a level of proficiency in a given competency or skill. When developing learning for an audience, part of the analysis of that training should include objectives that can be matched to the competency and proficiency level required by that audience.
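As a concrete illustration of that matching step, here is a minimal sketch (in Python) of how an objective might be tagged with the competency and target proficiency level it supports. The competency names, levels, and objective text are entirely hypothetical; the point is only that each objective carries an explicit link back to the model.

from dataclasses import dataclass

# Illustrative sketch only: hypothetical competencies, levels, and objectives.
@dataclass
class Objective:
    text: str          # the learning objective itself
    competency: str    # competency from the organization's model
    proficiency: str   # proficiency level the objective supports

objectives = [
    Objective("Identify the required fields on an incident report",
              competency="Incident Reporting", proficiency="Basic"),
    Objective("Conduct and document a complete accident investigation",
              competency="Incident Reporting", proficiency="Advanced"),
]

# During analysis, objectives can be filtered to the level a given audience requires.
advanced_objectives = [o.text for o in objectives if o.proficiency == "Advanced"]
print(advanced_objectives)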

Meeting proficiency with e-Learning

Assuming that just any e-Learning method (even traditional methods) can be used to meet any competency and proficiency level without such analysis can be detrimental to a learning effort; not all e-Learning is created equal. The categories range from basic Web-based presentations to collaborative, engaging learning events. The widespread availability of information, and the need to access information efficiently as it changes, characterizes the pervasive use of the Web as a resource. However, the business demand for high-level performance and problem-solving skills extends beyond the simple act of effectively retrieving information for mission-critical objectives.

This is where the use of the competency models can link into the analysis of e-Learning. If developed correctly, the business needs of the organization are reflected in a competency model that accounts for functions performed by different occupations. The model should include the proficiency at which those occupations are required to perform, and possibly different levels of occupations (for example, junior/senior, manager/non-manager, etc.) with their performance requirements.

Figure 1 is a basic representation of how the e-Learning categories can map to a proficiency scale. If a given audience has a competency requirement that is fairly low, one of the lower category tools for e-Learning can sufficiently meet that task. On the other hand, if a competency requirement is high, a more robust type of e-Learning should be examined.

 

Figure 1 Proficiency / e-Learning Diagram
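One way to read Figure 1 is as a simple lookup from the proficiency an audience must reach to the e-Learning categories worth examining first. The sketch below is purely illustrative; the groupings follow the proficiency column of Table 1 (below) and represent one reasonable reading of the figure, not a fixed rule.

# Minimal sketch: required proficiency level -> candidate e-Learning categories.
# Groupings follow the proficiency column of Table 1 and are illustrative only.
PROFICIENCY_TO_CATEGORIES = {
    "low": ["Passive Informational", "Basic Instructional",
            "Live Instructional", "Mobile Learning"],
    "medium": ["Basic Instructional", "Live Instructional",
               "Collaborative Instructional"],
    "high": ["Collaborative Instructional", "Immersed Simulation",
             "Mobile Learning"],
}

def candidate_categories(required_proficiency):
    """Return the categories to examine for a given proficiency requirement."""
    return PROFICIENCY_TO_CATEGORIES.get(required_proficiency.lower(), [])

print(candidate_categories("High"))
# ['Collaborative Instructional', 'Immersed Simulation', 'Mobile Learning']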

 

The e-Learning Category Selection Table

Table 1 can serve as a decision-making tool to help in selection of an e-Learning tool or category. The table categorizes broad e-Learning environments and methods, and explains how and at what proficiency level you can best apply these environments. In this table, the lower level methods of e-Learning generally lean toward information dissemination while the higher level methods incorporate more performance and higher-order thinking.

This is a broad generalization in terms of proficiency scale and learning media, since scales vary and learning media have a wide variety of tools available. In most cases, the range of tools that fall within a category enables that category to span the scale. For example, Category 2, Basic Instructional, may reach only a low level of proficiency if just its simpler tools are used in the training, or a medium level if its more advanced tools (such as complex assessments and multiple modules) are implemented; a brief sketch of this idea follows Table 1.

Table 1 e-Learning Category Selection
1 – Passive Informational
Proficiency: Low
Activity: Simple identification assessment such as multiple-choice, matching, or true-false.
Pros/Cons: Quick and easy development for simple knowledge presentation. Requires little or no interaction or performance from the learner. Lengthy sessions will “lose” people easily.
Tools:
  • Visual slide presentation
  • Audio
  • Video
  • Simple knowledge assessment

2 – Basic Instructional
Proficiency: Low/Medium
Activity: More challenging assessment such as short answer, fill-in-the-blank, or essay, where the learner creates a response rather than identifying one.
Pros/Cons: Breaks up the presentation of knowledge with assessment questions and varied activities. Can be longer because learners can leave and return as needed. Lengthier development time. No interaction among learners or with an instructor.
Tools:
  • Visual screen presentation
  • Audio
  • Video
  • Simple assessment
  • More complex assessment
  • Multiple modules or lessons

3 – Live Instructional
Proficiency: Low/Medium
Activity: Live Q&A with the instructor and demonstration of performance.
Pros/Cons: Live interaction among learners and instructor. In a live demo, the instructor can demonstrate functions “on demand” as the audience needs. Should not be lengthy. Learners cannot leave and return to the presentation unless viewing a previously recorded session (thus losing interaction).
Tools:
  • Visual screen presentation
  • Audio
  • Video
  • Live presenter
  • Synchronous chat
  • Whiteboard
  • Live demo
  • Live learner participation

4 – Collaborative Instructional
Proficiency: Medium/High
Activity: Complex problem-solving and case-study work in a collaborative environment with instructor feedback.
Pros/Cons: Promotes team building and problem solving in an online environment. Able to address higher-order thinking skills such as analysis and evaluation. Requires a completely new set of skills for instructors transitioning from the live classroom to the online classroom.
Tools:
  • Internal e-mail
  • Journal
  • Individual assignments
  • Simple assessment
  • Group assignments
  • Case study
  • Large and small group forums
  • Wikis
  • Learner blogs
  • Web 2.0

5 – Immersed Simulation
Proficiency: High
Activity: Actual performance assessment in a simulated environment.
Pros/Cons: Provides the opportunity to perform complex tasks and practice skills. Complex development and programming may be required.
Tools:
  • Simulation
  • Performance “gaming”
  • Performance assessment

6 – Mobile Learning
Proficiency: Low/High
Activity: Either none (if a lecture) or support rather than training.
Pros/Cons: Provides real-time, just-in-time support or information. Not generally considered a formal training event. Requires infrastructure support.
 
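As noted before Table 1, a single category can span the scale depending on which of its tools the design actually uses. Here is a minimal sketch of that idea for Category 2, Basic Instructional; the split between “simpler” and “more advanced” tools is an assumption drawn from the tool list in the table, not a standard.

# Illustrative sketch: the proficiency a Basic Instructional (Category 2) course
# can target depends on which of its tools are actually used.
# The tool groupings are assumptions based on Table 1.
SIMPLER_TOOLS = {"visual screen presentation", "audio", "video", "simple assessment"}
ADVANCED_TOOLS = {"more complex assessment", "multiple modules or lessons"}

def reachable_proficiency(tools_used):
    """Rough gauge of the proficiency level a Category 2 design can target."""
    tools = {t.lower() for t in tools_used}
    if tools & ADVANCED_TOOLS:
        return "Medium"
    if tools & SIMPLER_TOOLS:
        return "Low"
    return "Unknown"

print(reachable_proficiency(["Visual screen presentation", "Audio"]))   # Low
print(reachable_proficiency(["Audio", "More complex assessment"]))      # Medium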

Emphasis on instructional design

The guidance this table provides does not mean that a presentation tool on the low end of the scale can’t support a higher proficiency for a given skill or competency. With a skill that is fairly limited in scope, and given excellent instructional design, even a simple development tool that largely meets the definition in Category 1 can suffice. Conversely, more complex tools can be underused; something with the robust potential of a course management system (Category 4) can be designed to create little more than an online lecture.

There are two main issues that affect the category in which an e-Learning development or delivery tool may best fit. The first is the tool’s ability to support an appropriate level of learner activity or performance to fit the desired proficiency level. On the table, this goes from simple multiple-choice and true-false questions up to an immersed simulated performance assessment. The second is the ability of the tool to achieve complex objectives (such as those on the upper end of a taxonomy) or its ability to support a complex network of objectives (such as a series of supporting, enabling, and terminal objectives). While both of these issues rely on the functionality available in the development tool, they are also both heavily dependent on sound instructional design in the use of the tool. This emphasizes the importance of strong designers and developers applying instructional design principles to well-designed tools.

Performance-Based Design Formula

My favorite formula for performance-based design is the following:

  • Framework
  • Performance
  • Reflection
  • Repeat

It may sound like the instructions on the back of your shampoo bottle, but its simplicity means it can apply to virtually any learning situation and any learning media. The intent of this particular formula is to get training to reach the higher end of the proficiency scale. Table 2 describes how it works.

 

Table 2 Performance-Based Design Formula

Framework

This is the structure and the expectations of the intended performance. In any given training session, this is probably the briefest section. It may include an introduction to how to use information and resources related to the performance, but note that this is not a teaching session or presentation on the content of those resources. This is an important distinction in today's society where information is readily available and easily modified. Information available to us can change regularly, but the work we perform remains much more static. Spending valuable training time learning (or attempting to memorize) the content of an information source that can change tomorrow is a waste when the real value comes from performance.

Performance

In performance-based training the information is not ignored or left out; it is simply learned within the context of the performance rather than in isolation (and reinforced in the next section on Reflection). This means that the "exercise" starts much earlier in the training than it would in an information-based course. This can be frustrating and uncomfortable at times for learners. They are intentionally being thrust into a performance situation for which they may not feel prepared, but that's precisely the point! Learners should be able to face this anxiety in a safe training situation first, and learn from it.

Reflection

This is an extremely important part of learning. It is an opportunity for learners to analyze and evaluate their own performance (and their peers’, if applicable) to determine what they would do differently next time. This is the time for them to articulate what they have learned and, most importantly, to learn from their mistakes. Instructors need to coach learners to draw their own conclusions, but this is also the time when instructors can offer tips that learners can take in and apply the next time. You’ll notice that this is probably the most valuable time for the instructor to provide actual instruction, because it is now being provided in a meaningful context rather than in isolation.

Repeat

This is the "next time." Instead of being given just one (or few) opportunities to practice the performance, performance-based training affords many chances. Not only should the next time be a new opportunity to apply skills (not a repeat of the same situation), but new performance opportunities may also increase in complexity. For example: the Framework portion may include a new information source or tool, or the Performance portion may introduce new variables.

 

Examples

Now consider how analyzing for the tool that best supports the desired performance, combined with a performance-based design strategy, affects your development. The following examples show different ways of using, and even combining, the e-Learning categories to aim for higher proficiency.

 

Example – Education accreditation training, online synchronous/asynchronous (e.g., a Course Management System):

  • Framework – A synchronous discussion allows the instructor to introduce the resources needed, including policies on accreditation, checklists and standards they will need to follow, and instruction on what the expectations are for the first scenario.
  • Performance – Place learners into small working groups in designated online areas with synchronous and asynchronous tools, and direct them to school blueprints, curriculum, teacher qualifications, and administrator qualifications for an elementary school that they will evaluate for accreditation. Learners work independently to study documents (taking any available training tutorial needed), and then work together to create and then upload their evaluation of the school.
  • Reflection – Learners can now review the uploaded evaluations of the other groups to compare their work. The small groups participate in synchronous discussions among themselves to decide what they would do differently, what they liked best or least about other evaluations, and may even work to change and resubmit their own evaluation. The instructor also leads an asynchronous discussion with the entire class about the activity.
  • Repeat – The next performance sessions will have new or more complex situations, such as a private school with children as young as 18 months old, or a state university.

 

Example – Flight Training, blended:

  • Framework – This course is broken into an online portion and a classroom/field portion. The instructor provides an overview of the software learners are required to download, the online environment and resources, and what to expect for the simulations and the actual flights when they arrive at the field.
  • Performance – Learners individually complete a series of computer-based flight simulations and upload a self-evaluation of their flights. Tutorials about various equipment and procedures are available on the site for learners who need them.
  • Reflection – Instructors lead asynchronous discussions on each flight with the entire class. Learners can also have synchronous or asynchronous discussions with other learners about their flights.
  • Repeat – The computer-based simulations get more and more complex, and eventually, when the class meets at the airfield, they transition to live flights with the instructor where flights, checklists, and evaluations are live.

 

Example – Spreadsheet software, Web-based instruction (no instructor)

  • Framework – This course has no instructor, so the course starts the Framework section by introducing the learner to the software environment, and provides an overview of how the scenarios will run and how to access help as they go along.
  • Performance – The first scenario is to create a simple balance sheet, and it provides figures and criteria for the learner to use as a guide. Links to help and common questions are also provided in case the learner needs assistance.
  • Reflection – This course has no live interaction with other learners or an instructor, so the software imitates this phase. It presents a list of common errors related to the scenario, from which the learner selects any errors he or she made, and it also conducts a software-based evaluation. Learners are then given solutions for each error they selected; since there may be several possible solutions for each error, the learner chooses which solution to use and can view a tutorial of the solution, or even go back to the scenario to apply the solution themselves. The software also keeps track of the most commonly applied or preferred solutions based on other learners who have taken the same training.
  • Repeat – Scenarios get more and more complex as the learner advances through the spreadsheet training. Common error lists and evaluations correspond to each scenario.

 

Conclusion

A point that has been made so many times that it sounds trite – the use of a fancy new tool does not automatically translate to a good end product. Like an unschooled architect building a home, someone could very logically use the given tools to build a structure that doesn’t end up having a logical function: for example, placing all of the doors on the first floor and all of the windows on the second floor. But even more so than houses, people are complex, and you can’t expect instructional miracles out of a simple Web-based slide show.

Competency models give us a way to associate learning with real business performance that gives our analysis and design a solid foundation. We have the potential to get past ROI that simply places value on counting courses taken and seats filled, or that subtracts travel dollars from the budget.