Learning from well-described examples is demonstrably useful, but getting a sufficient quantity can be a problem. Instructors are time-pressed, and while they may use examples in their teaching, their documentation of those examples can be haphazard.
The question
Is there another source for examples and documentation?
Web 2.0, the era of user-generated content, suggests that perhaps learners could be responsible for annotating examples in useful ways. Moreover, peer review has proven powerful; if the initial annotations aren’t sufficient, can a community approach improve them? There are other reasons to be interested. When students create their own explanations, the learning outcomes are improved. Can we get sufficiently high-quality examples from learners? Will this benefit them? In this study, Peter Brusilovsky and his student I-Han Hsiao investigated the possibility.
The study
Hsiao, I.H. & Brusilovsky, P. (2011). The Role of Community Feedback in the Student Example Authoring Process: An Evaluation of AnnotEx. British Journal of Educational Technology, 42(3).
Methods
The domain was computer programming, and the researchers had previously built systems to support using examples for teaching. Here they developed a new system, AnnotEx, which supported learners in creating annotations of code examples, reviewing others’ annotations, and revising them. The study itself consisted of learners annotating code examples; then an experimental group went on to review others’ annotations and comment on them (and had their own commented on). They could then go back and revise their original annotations. The outcomes evaluated were the quality of the annotations (initially and after revision), the impact on a knowledge test, and the students’ subjective evaluation after the fact.
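To make the task concrete: an annotation here is an explanation a learner attaches to a line or region of a code example. The paper doesn’t reproduce the actual examples, so what follows is a hypothetical sketch (in Java, chosen purely for illustration; the real AnnotEx examples may differ) of what a learner-annotated example might look like, with the annotations rendered as comments:

public class SumExample {
    // Annotation: sum() demonstrates the accumulator pattern.
    public static int sum(int[] values) {
        int total = 0; // Annotation: start the accumulator at zero so the first addition is correct
        for (int i = 0; i < values.length; i++) { // Annotation: visit every index exactly once
            total += values[i]; // Annotation: fold each element into the running total
        }
        return total; // Annotation: the accumulator now holds the sum of all the elements
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4})); // prints 10
    }
}

In the peer-review step, a reviewer’s comment might push the author to explain why the loop runs to values.length rather than merely restating what the code does, which is exactly the kind of revision the study tracked.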
Results
First, the results of the knowledge test were not significant. On a 10-point scale, the difference between students’ pre- and post-test scores did not reach significance. The authors attribute this to the short period of the study, but I notice that the pre-test scores were high, at 8.73 for the experimental group and 9.28 for the control group, leaving little room for improvement. There was improvement (to 9.57 for control and 9.6 for experimental), but not enough to register statistically. Previous studies, however, have shown the benefit.
The quality of the annotations, however, did improve upon review. The initial annotations were somewhat erratic, but after review the quality was consistently high in the experimental group (a gain of 1.29 versus 0.12 for the control, the former being significant) according to expert ratings. Student ratings of the annotations also paralleled the experts’ (a correlation of .93), suggesting that the peer-review outcome was good enough to be useful without expert review. And while peer review didn’t affect already high-rated producers of annotations, it significantly benefited weaker students in the quality of their resulting annotations: weaker students’ ratings “more than doubled, from 1.40 to 3.40.”
Finally, the students felt that the experience was valuable, particularly those who benefited most. I should note that the technology here scaffolded the process: the interface design made it easy to accomplish the initial task of annotation, as well as the subsequent tasks of commenting on annotations and revising them.
Implications for design and eLearning
So what’s the take-home message? Student annotation of examples is worthwhile, more so when that process is made explicit and put under peer review. Getting learners involved in the meta-processes of learning has direct implications for the quality of the learning. One can also propose that it helps learners become self-improving learners. You should consider providing learners with examples that they annotate to unpack the underlying thinking, and then have them review each other’s work constructively. The benefits are multiple.