Learning from well-described examples is demonstrably useful, but getting a sufficient quantity can be a problem. Instructors are time-pressed, and while they may use examples in their teaching, their documentation of those examples can be haphazard.

The question

Is there another source for examples and documentation?

Web 2.0, the era of user-generated content, suggests that perhaps learners could be responsible for annotating examples in useful ways. Moreover, peer review has proven powerful; if the initial annotations aren't sufficient, can a community approach improve them? There are other reasons to be interested: when students create their own explanations, learning outcomes improve. Can we get sufficiently high-quality examples from learners? Will the process benefit them? In this study, Peter Brusilovsky and his student I-Han Hsiao investigated these possibilities.

The study

Hsiao, I.H. & Brusilovsky, P. (2011). The Role of Community Feedback in the Student Example Authoring Process: An Evaluation of AnnotEx. British Journal of Educational Technology, 42(3).


The domain was computer programming, and the authors had previously built systems to support teaching with examples. Here they developed a new system, AnnotEx, which supported learners in creating annotations of code examples, reviewing others' annotations, and revising their own. In the study, learners first annotated code examples; an experimental group then went on to review and comment on others' annotations (and had theirs commented on), and could subsequently go back and revise their original annotations. The outcomes evaluated were the quality of the annotations (initial and after revision), performance on a knowledge test, and the students' subjective evaluations after the fact.
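To make the task concrete: the paper does not reproduce AnnotEx's exact annotation format, but the kind of artifact under study can be sketched as a small code example whose steps a learner has annotated to unpack the underlying reasoning. The function and the annotations below are invented here for illustration only.

```python
# Hypothetical illustration of a student-annotated code example,
# roughly the kind of artifact learners produced and peer-reviewed
# in AnnotEx. The annotation wording and format are invented.

def running_average(values):
    # Annotation: track both the sum and the count so the average
    # can be computed in a single pass over the input.
    total = 0
    count = 0
    for v in values:
        total += v   # Annotation: accumulate the sum as we go
        count += 1   # Annotation: count items ourselves, so the same
                     # pattern works for any iterable, not just lists
    # Annotation: guard against dividing by zero on empty input
    return total / count if count else 0.0

print(running_average([2, 4, 6]))  # → 4.0
```

In a peer-review round, a classmate might comment that the empty-input annotation should also state what value is returned in that case, and the author could revise accordingly.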


First, the results of the knowledge test were not significant: neither group showed a statistically significant gain from pre- to post-test. The authors attribute this to the short duration of the study, but I note that the pre-test scores were already high, at 8.73 out of a possible 10 for the experimental group and 9.28 for the control group, leaving little room for improvement. Scores did rise (to 9.6 experimental and 9.57 control), but not enough to reach significance. Previous studies, however, have shown the benefit.

The quality of the annotations, however, did improve upon review. The initial annotations were somewhat erratic, but after review the quality in the experimental group was consistently high according to expert ratings (a gain of 1.29 versus 0.12 for the control group, only the former significant). Student ratings of the annotations also paralleled the experts' (a correlation of .93), suggesting that peer review alone was sufficient without expert review. Moreover, peer review didn't change the ratings of students who already produced strong annotations, but it significantly benefited weaker students, whose ratings "more than doubled, from 1.40 to 3.40."

Finally, the students felt the experience was valuable, particularly those who benefited most. I should note that the technology here scaffolded the process: the interface design made it easy to accomplish the initial task of annotation, as well as the subsequent tasks of commenting on others' annotations and revising one's own.

Implications for design and eLearning

So what's the take-home message? Student annotation of examples is worthwhile, more so when the process itself is made explicit and subjected to peer review. Getting learners involved in the meta-processes of learning has direct implications for the quality of that learning, and one can also propose that it helps them become self-improving learners. Consider providing learners with examples to annotate, unpacking the underlying thinking, and then have them review each other's work constructively. The benefits are multiple.