Could Analytics Kill the Video Star?

Developing a program that produces lasting learning can require some incredibly nuanced decision-making. From planning to delivery to assessment, each step is land-mined by complexly interwoven (yet often-contradictory) philosophies and research. So it’s no surprise that, when a technique does start to get close to something like universal acceptance, we tend to latch on to it a little too tightly. And, perhaps, breathe a small sigh of relief.

Streaming video has been just such a tool. The literature on the topic, which has historically been based on student self-reporting and outcome achievement comparisons, was comfortably consistent. Students told us that they wanted video, they responded favorably to having access to video (Chen, Lambert, and Guidry, 2010), and they generally seemed to do better when video was there (Demetriadis and Pombortsis, 2007).

But this was always an incomplete picture. Self-reporting is a woefully imperfect way to gather behavioral information, for one thing. How we act, how we perceive our actions, and how we report them to others are three independent variables that do not really enforce any consistency or accuracy on one another. Similarly, outcome improvements associated with video remain inconsistent, and quiz score comparisons can never really tell us how or why one video succeeded where another failed.

We’ve certainly had reason for faith where video is concerned. It remains one of the most provocative communication tools available to trainers and designers, and among the most appreciated by students. But we have also lacked key insights about what actually happens once the video is released, and we have had to fill many of those insight gaps with assumptions.

Diagnosis in progress

Enter the era of analytics.

Our ability to track learner behaviors may still be in its relative infancy, but like an infant it has grown significantly during a very short period of time. As it has grown, it has begun to uncover natural behavior patterns and study habits among online learners that will eventually help us understand what the student experience truly is, and how best to use our instructional tools.
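For a concrete sense of what that tracking can look like, here is a minimal sketch that aggregates raw “video opened” events into per-student adoption rates (the share of a course’s videos each learner actually accessed). The event format, field names, and figures are illustrative assumptions, not data or code from any platform or study discussed here.

```python
# Minimal sketch (hypothetical data model): given raw "video opened" events,
# estimate what share of a course's videos each student actually accessed.
from collections import defaultdict

# Each event records which student opened which video; field names are illustrative.
view_events = [
    {"student_id": "s01", "video_id": "lecture-01"},
    {"student_id": "s01", "video_id": "lecture-02"},
    {"student_id": "s02", "video_id": "lecture-01"},
]

TOTAL_VIDEOS_IN_COURSE = 34  # assumed course size for the example

def adoption_rates(events, total_videos):
    """Return each student's share of course videos opened at least once."""
    videos_seen = defaultdict(set)
    for event in events:
        videos_seen[event["student_id"]].add(event["video_id"])
    return {student: len(vids) / total_videos for student, vids in videos_seen.items()}

print(adoption_rates(view_events, TOTAL_VIDEOS_IN_COURSE))
# {'s01': 0.058..., 's02': 0.029...} -- low adoption shares of the kind the
# case studies below report.
```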

In the case of video, we’re beginning to see a much more complex type of learner adoption than we previously (reasonably) assumed. And far from being a certain, direct connection between our developed instruction and the students’ need to learn, video appears to come with just as large a set of “if” and “when” caveats as any other instructional tool.

Consider the example findings below, which often clash significantly with self-reporting studies of video adoption.

Sample case studies

1. A three-year study of streaming video adoption completed at an Illinois medical school (McNulty et al., 2011) tracked three student cohorts across five different courses. The figures they reported were surprising: 64 percent of students viewed less than 10 percent of the lectures available to them. In fact, the study credited fewer than one student in 20 with viewing a “large number” of videos per course.

2. A study comparing system data logging to the self-reported use of streaming videos by more than 5,000 students at two separate universities in the Netherlands (Gorissen, Van Bruggen, and Jochems, 2012) found marked discrepancies between the amount of video students reported viewing and their actual usage. In fact, most students in the study never watched a full recorded lecture and, in one course that featured 34 videos, the most that any one user viewed was 20.

3. An evaluation of how instructor-developed video affects student satisfaction levels unintentionally uncovered an interesting trend: the students who accessed the videos were markedly less likely than their peers to download the other instructional materials in the course, even though those files contained unique content that was not available in the videos (Draus, Curran, and Trempus, 2014).

4. In 2014, researchers tracking learner view patterns and interaction peaks (Kim et al., 2014) found that more than 53 percent of student viewers exited a five-minute instructional video early, with most of those dropouts occurring in the first half of the video. The percentage of dropouts rose as video length increased, with the dropout rate climbing as high as 71 percent for videos over twenty minutes long. Re-watchers showed even higher dropout rates, as did learners accessing step-by-step tutorials as opposed to lectures. This may support the idea that students go in pursuit of the information they think they need (such as a visual example of how to complete the third step of a four-step process), when they think they need it, rather than perceiving a complete instructional video as essential viewing.

5. A massive review of edX courses (Guo, Kim, and Rubin, 2014) involving data from more than 127,000 students found that even the shortest videos (zero to three minutes long) lost a fourth of all viewers in the first 75 percent of the video’s runtime, meaning a large number of learners would have missed a substantial portion of the instruction even when the attention request was minuscule. The numbers were again even more pronounced for longer videos. These figures are particularly telling, as the authors had already discounted any session lasting under five seconds, to prevent their data from being skewed by accidental plays (a sketch of this kind of dropout calculation follows this list).

6. Another MOOC-based review of a large MITx course (Seaton et al., 2014) found that nearly a fourth of all successful certificate earners accessed less than 20 percent of course videos.
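To make these dropout figures more tangible, the sketch below shows one way an in-video dropout rate can be computed from session logs, in the general spirit of the analyses above: discard near-instant plays (Guo, Kim, and Rubin dropped sessions under five seconds), then count how many of the remaining sessions end before a chosen fraction of the runtime. The log format, thresholds, and numbers are illustrative assumptions rather than the studies’ actual pipelines.

```python
# Minimal sketch (hypothetical log format): estimate in-video dropout by discarding
# near-instant "accidental" plays, then counting sessions that end before a chosen
# fraction of the video's runtime.

VIDEO_LENGTH_SECONDS = 180   # a short, zero-to-three-minute video
MIN_SESSION_SECONDS = 5      # filter out accidental plays, as in Guo, Kim, and Rubin (2014)
COMPLETION_FRACTION = 0.75   # "watched at least 75 percent of the runtime"

# Duration, in seconds, of each logged viewing session for one video (illustrative values).
session_lengths = [2, 180, 95, 4, 176, 60, 180, 130, 3, 150]

def dropout_rate(sessions, video_length, min_session, completion_fraction):
    """Share of non-accidental sessions that end before the completion threshold."""
    real_sessions = [s for s in sessions if s >= min_session]
    if not real_sessions:
        return 0.0
    dropped = [s for s in real_sessions if s < completion_fraction * video_length]
    return len(dropped) / len(real_sessions)

rate = dropout_rate(session_lengths, VIDEO_LENGTH_SECONDS,
                    MIN_SESSION_SECONDS, COMPLETION_FRACTION)
print(f"Dropout rate: {rate:.0%}")  # prints "Dropout rate: 43%" for these illustrative values
```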

Lessons learned

Again, any discussion of eLearning built on user data must rest on the understanding that this is only the beginning. It’s early yet for developers to start drawing broad conclusions. Even so, the growing collection of research does begin to make video look a lot like every other tool in our toolkit: there are conditions under which it is effective, and conditions under which it is definitely not. Fully detailing and understanding those conditions will take time, and will eventually help us see how variables like program type, subject area, and learner background can influence video adoption rates.

Until then, because video can be expensive relative to other eLearning development work, we can’t help wanting to glean some early insights from what we’ve read. At the very least, by following the principles below, we can try to avoid committing costly resources to circumstances that will fail to provide acceptable educational returns.

Principles for the future

Do… expect students to respond favorably to the inclusion of streaming video.
Don’t… treat favorable learner response as proof that students are actually watching the videos.

Do… leverage assessments and assignments as tools to help you encourage video adoption.
Don’t… ask students to view a video without an immediate follow-up activity or purpose.

Do… make videos accessible to students during review periods, and tag them so that content is easy to locate outside of its original instructional context.
Don’t… discount other development tools when planning instruction. The educational ROI on video may not always justify its use as a front-end instructional tool. Consider your content and objectives.

Do… keep content videos short (three to five minutes) whenever it is appropriate to do so.
Don’t… assume that a short video length will be enough to guarantee full student adoption.

Do… make it clear to students what content is covered in each of the available educational resources.
Don’t… assume that students will recognize that other files or media in a lesson will contain unique, essential information not found in the video.

Works cited

Chen, P.-S. D., Amber D. Lambert, and Kevin R. Guidry. “Engaging Online Learners: The Impact of Web-based Learning Technology on College Student Engagement.” Computers & Education 54.4. 2010.

Demetriadis, S., and Andreas Pombortsis. “e-Lectures for Flexible Learning: A Study on Their Learning Efficiency.” Educational Technology & Society 10. 2007.

Draus, Peter J., Michael J. Curran, and Melinda S. Trempus. “The Influence of Instructor-generated Video Content on Student Satisfaction With and Engagement in Asynchronous Online Classes.” Journal of Online Learning and Teaching 10.2. 2014.

Gorissen, Pierre, Jan Van Bruggen, and Wim Jochems. “Usage Reporting on Recorded Lectures Using Educational Data Mining.” International Journal of Learning Technology 7.1. 2012.

Guo, Philip J., Juho Kim, and Rob Rubin. “How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos.” Proceedings of the First ACM Conference on Learning at Scale. ACM, 2014.

Kim, Juho, et al. “Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos.” Proceedings of the First ACM Conference on Learning at Scale. ACM, 2014.

McNulty, John A., et al. “A Three-year Study of Lecture Multimedia Utilization in the Medical Curriculum: Associations with Performances in the Basic Sciences.” Medical Science Educator 21.1. 2011.

Seaton, Daniel T., et al. “Who Does What in a Massive Open Online Course?” Communications of the ACM 57.4. 2014.
