In last week’s Spotlight, I shared the story of the Ann Arbor Hands-On Museum’s personalized interactive exhibits. In that project, we used RFID tags and the Experience API (xAPI) to create an enhanced field trip experience for school children. The combination provided a richer experience by slowing the children down long enough to interact meaningfully with the exhibits and learn more. At the same time, the project produced quantitative data tied to the learning experience and to the science standards that schools and teachers are held to.

This week, I’ll show you how we are using the xAPI and tablets in a public health education project in Michigan.

The Stroke Ready App

This project is in pilot testing right now in select medical clinics in Michigan.

The University of Michigan Health System’s Stroke Center team wanted a way to educate the public to recognize the symptoms of a stroke and to get potential stroke victims to the emergency department quickly. Speed is of the essence, as timely treatment and medication can dramatically reduce the severity and long-term impact of a stroke. The team’s idea was to deliver quick, video-based learning via tablet to non-stroke patients waiting in clinics and doctors’ offices for their routine appointments. The team will then evaluate both the immediate increase in knowledge (a Kirkpatrick Level 2 assessment) and the longer-term impact on stroke recognition and early treatment (a Kirkpatrick Level 3 assessment).

What it does

Medical assistants (MAs) in primary care practices offer tablets with the Stroke Ready app to patients in the waiting room. The initial target audience is a low-literacy population, so there’s very little text on screen and information is largely audio-visual. The learner chooses a narrator/guide for their experience and watches an informational video about responding to stroke. Then the learner can watch one or more video scenarios and select how to respond in each case: wait and do nothing, drive to the hospital, call 911 for an ambulance, or call the doctor.

The tablets may not always be connected to the internet during use, so the transactions are stored locally on the tablet and transmitted once the tablet connects to Wi-Fi.

How it works

On its face, this doesn’t strike one as a case for the xAPI. The selections and answer choices made by the learners in this project could mostly be handled by SCORM with a little creativity. However, there are a few things about this project that make it a solid xAPI solution.

The tablets are offline while the learners use them. A number of course development tools and LMS products provide offline SCORM players, but the xAPI was designed from the start to work disconnected from the Internet. The app constructs and stores the activity statements locally; once connected, it sends the statements, verifies receipt, and then clears them from local storage so they aren’t sent twice later on.
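The store-and-forward pattern described above can be sketched in a few lines. This is illustrative only, not the actual Stroke Ready code: `storage` stands in for the browser’s localStorage, and `sendToLRS` is a hypothetical callback that returns true once the LRS confirms receipt.

```javascript
// `storage` stands in for the browser's localStorage (illustrative).
const storage = new Map();

// Store a statement locally while the tablet is offline.
function queueStatement(stmt) {
  const pending = JSON.parse(storage.get("pending") || "[]");
  pending.push(stmt);
  storage.set("pending", JSON.stringify(pending));
}

// Called once the tablet reconnects: send the batch, and clear local
// storage only after the LRS confirms receipt, so nothing is lost or
// sent twice. Returns the number of statements delivered.
function flushQueue(sendToLRS) {
  const pending = JSON.parse(storage.get("pending") || "[]");
  if (pending.length === 0) return 0;
  if (!sendToLRS(pending)) return 0; // not confirmed: keep the queue
  storage.set("pending", "[]");
  return pending.length;
}
```

The key design point is the order of operations: the queue is cleared only after a confirmed send, which is what prevents both data loss and duplicate statements.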

Learners are allowed multiple attempts at each video scenario, and with the xAPI we record every attempt. This lets us compare success rates over time and experience. (We may be the only people eager for longer wait times in the doctor’s office!) Some more sophisticated LMS products do handle repeated attempts in a traditional SCORM approach, but only the last attempt is stored and reported in the LMS. In the screen shown in Figure 1, the user selects a narrator/guide for the interactions. This selection is recorded as an xAPI statement, so results can later be broken down by the guide’s gender and race.


Figure 1:
The user selects a narrator (guide) from this screen for the interactions with the app

In Figure 2, the user can choose a scenario to watch and think about what they would do in the situation. They can come back and watch the other scenario later, in which case another xAPI statement is recorded.


Figure 2:
The user selects a scenario from this screen

In Figure 3, the user chooses an action based on the scenario they just watched: do nothing, call 911, call the doctor, or drive to the hospital. This is simply an image-based multiple-choice question with branching based on the answer, but a statement is recorded each time a choice is made, noting whether it was right or wrong. (FYI … calling 911 is the right answer every time a stroke is involved!)
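A statement recorded for one of these choices might look roughly like the sketch below. The verb IRI is a standard ADL vocabulary verb, but the activity IRIs, the account homePage, and the choice labels are placeholders I’ve made up for illustration, not the project’s actual identifiers.

```javascript
// Illustrative shape of the statement recorded for each scenario choice.
function choiceStatement(userName, scenarioId, choice) {
  return {
    actor: {
      objectType: "Agent",
      // Placeholder account; the project uses an anonymous per-session name.
      account: { homePage: "http://example.com/stroke-ready", name: userName }
    },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/answered",
      display: { "en-US": "answered" }
    },
    object: { id: "http://example.com/stroke-ready/scenarios/" + scenarioId },
    result: {
      response: choice,
      success: choice === "call-911" // calling 911 is always the right answer
    }
  };
}
```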


Figure 3:
At this point, the user selects an action based on the scenario: Do nothing, call 911, call the doctor, or drive to the hospital

A variety of aspects of the learning experience are recorded, not just the correct or incorrect answers to the scenario questions. In this case, the learner’s demographics, the narrator/guide they select, and the scenario(s) to which they’re drawn are all interesting data points that we’re collecting with xAPI statements. (Figure 4)


Figure 4:
The app collects demographic data, using xAPI statements

The xAPI is flexible in the way it allows an Activity Provider to decide what information is meaningful in context. There are standard ways of constructing statements, of course, and as long as you have an actor, a verb, and an object you’ve met the minimum requirements, but you can feed more information into the statement as needed. For example, is a question part of a larger section? Are these smaller interactions part of a group? In this case, we have the advantage of being both the Activity Provider and the report creators, which means we can tailor our statements and reports to capture and display exactly what we want, in the manner that is most useful, while still producing valid xAPI statements that can be viewed in or transferred to any LRS without losing their usefulness. We can then export this data to a spreadsheet for further analysis, or filter it on the spot for quick access to specific data points.

The AHA!

Using SCORM, a learner is often required to log into a system to record their individual progress and interactions. But the xAPI allows us to record the activity of anonymous users in any location, an advantage when there are privacy concerns or if you want to gather anonymous data without a complex system of authentication or a cumbersome backend.

For this project, we don’t want or need to know a user’s personal information. The app is targeted initially at low-literacy audiences who may not be comfortable with keyboards and email addresses, and being required to divulge personal information could hinder their participation. Since we’re already using local storage to keep xAPI data, we also use it to give each user a distinct “instance” each time the app is launched. The app assigns a number to each launch and keeps this number in local storage; the number serves as the user’s “name” in xAPI statements during that session. The next time the app is launched, it increments the number, giving the next user a unique name. All of this is invisible to the user. The upshot is that users participate anonymously while the research team collects important information aggregated across many users.
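The per-launch identity scheme just described can be sketched like this. Again, the names are illustrative rather than the project’s actual code, and `storage` stands in for the browser’s localStorage:

```javascript
// `storage` stands in for the browser's localStorage (illustrative).
const storage = new Map();

// On each app launch, increment a persisted counter and use it to
// build the anonymous "name" for that session's xAPI statements.
function nextSessionName() {
  const n = Number(storage.get("launchCount") || 0) + 1;
  storage.set("launchCount", String(n));
  return "user-" + n;
}
```

Because the counter persists across launches while nothing about the person is stored, each session gets a unique actor name and the data stays fully anonymous.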

This is the first iteration of the project. After leaving it in the field and collecting data for a few months, the team will regroup and decide what upgrades to make. A key part of that process will be accounting for the existing data structure as we add and change things, so that we can still report across both iterations. In the old SCORM world, this was never a factor; in the increasingly powerful and flexible world of the xAPI, we need to plan for it.