Adventures in the xAPI: The Ann Arbor Hands-on Museum Project

As the eLearning world begins to adopt and develop the xAPI standard, it’s not hard to find products, tools, and platforms that offer xAPI compliance. But as an industry, the xAPI is still new enough that it’s hard to find actual examples of it at work in the real world. And yet, those stories are important cases to guide us as we move forward. This is the first article in an “adventure series” that will explore the xAPI in its natural habitat.

If you’ll forgive me, in the first couple of stories I’ll share projects my team is working on. After that, the sky’s the limit, the more the merrier, and so on. Do you have stories to share? Drop me a line—I’d love to hear what you’re doing and help you share your story.

RFED—the Ann Arbor Hands-On Museum’s personalized interactive exhibits

(This project won the first xAPI Hyperdrive at DevLearn in October 2014.)

The Hands-On Museum is a children’s STEaM (science, technology, engineering, art, math) museum based in Ann Arbor, Michigan, USA. Museum educators were eager to offer an enhanced field trip experience for school children, one that would show quantitative data related to the learning experience and the science standards that schools and teachers are held to. At the same time, they wanted to provide a richer experience at each exhibit, slowing the kids down long enough to interact meaningfully with the exhibit and learn something more.

To do so, the museum would need to track which students were at which exhibits and what they did there. The resulting data also helps teachers tie their museum field trips to curriculum standards to ensure that their time (and their budget) is spent meaningfully. The RFED project is an ongoing effort to meet all these needs and, at the same time, explore some cool technology and connect with the community. (RFED is a play on the underlying RFID technology and ED for education.)

What it does

In the first iteration of this project, the museum used RFID tags embedded into nametag lanyards to passively log students into each exhibit as they came into range of a wall-mounted antenna. Here’s how it works: As the children come into range, a tablet computer mounted nearby greets them by name and engages them in a short series of questions and explorations with the exhibit, which is typically an analog device of some sort (Figure 1).


Figure 1:
As the children come into range, a tablet computer mounted nearby greets them by name and engages them in a short series of questions and explorations with the exhibit

Data from each student activity is then sent back to the LRS (Learning Record Store) immediately after the student(s) complete the interaction, or after they “log out” by leaving the area (as determined by the antenna). Teachers and museum staff can then access a dashboard showing an activity stream and some simple data visualizations by student and by exhibit. A simple search function allows teachers to search for specific text strings found in either the xAPI statements or in the text responses typed in by students.

As this project moves into Phase 2, the somewhat finicky RFID technology is being replaced by beacons that offer more granular proximity tracking and a more robust signal. As it turns out, kids wiggle a lot and don’t stand with their name badges neatly lined up in front of an antenna. Go figure!

How it works

When a student—or group of students—approaches the exhibit, the antenna picks up on their name badges and the tablet opens the appropriate grade-level content. One of the advantages of the xAPI over SCORM is that it can record the same activity data for multiple learners at once. Rules are being worked out to determine what happens when students from multiple grades approach all at once.

Each time a question is answered or a response given on the tablet, an xAPI statement is formed. To allay data privacy concerns with the children, each student is assigned a badge with the name of a famous scientist, engineer, inventor, or mathematician. Back at school, the teacher simply matches up the student with the correct badge. (At the same time, it’s an opportunity for even more learning about the person on the name badge!)

Here’s the corresponding activity data sent to the LRS and retrieved:

Following the xAPI statement structure of Actor-Verb-Object-Context:

Actor = Ada Lovelace

Verb = answered

Object = “It isn’t big enough to get into space”

Context = “Blast Off” (the name of the exhibit) and “Q3” (Question #3 in the interaction)
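As a rough illustration, the Actor-Verb-Object-Context breakdown above maps onto xAPI JSON something like the sketch below. The verb IRI is ADL’s published “answered” verb; the activity IRIs, email address, and the choice to carry the answer text in the object definition are placeholders of my own for illustration (in stricter modeling, the student’s answer would often travel in the statement’s `result.response` field instead).

```python
import json

# A minimal sketch of the Ada Lovelace statement as xAPI JSON.
# The example.org IRIs and mbox are hypothetical, not the museum's real ones.
statement = {
    "actor": {
        "name": "Ada Lovelace",
        "mbox": "mailto:ada.lovelace@example.org",  # placeholder identifier
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/answered",
        "display": {"en-US": "answered"},
    },
    "object": {
        "id": "http://example.org/exhibits/blast-off/q3",  # hypothetical activity IRI
        "definition": {
            "name": {"en-US": "It isn't big enough to get into space"},
        },
    },
    "context": {
        "contextActivities": {
            # "Blast Off" is the parent exhibit; Q3 is the specific question.
            "parent": [{"id": "http://example.org/exhibits/blast-off"}],
        }
    },
}

print(json.dumps(statement, indent=2))
```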

Admittedly, a multiple-choice interaction like this example from the original prototype is pretty SCORM-like. You could accomplish many of the same things using an LMS and some spiffy reporting tools. However, the xAPI allows us to ask and record more complex interactions like those in Figures 2 and 3.


Figure 2:
An example of a more complex interaction enabled by use of the xAPI



Figure 3:
Another example of a complex interaction

The data is sent to a LearnShare LRS for storage. A separate dashboard page requests these statements from the LRS to create the nearly-real-time report. A key benefit in this application is that the xAPI statements easily enable a human-readable activity stream. While this is interesting, and sometimes fun to read, in any sort of quantity it soon becomes overwhelming and considerably less useful.
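For a sense of how a dashboard page might pull those statements, here is a small sketch that builds a query against an LRS. The base URL is a placeholder, not LearnShare’s actual endpoint; the `/statements` resource, its query parameters, and the version header come from the xAPI specification.

```python
from urllib.parse import urlencode

LRS_BASE = "https://lrs.example.org/xapi"  # hypothetical LRS endpoint


def statements_url(verb_id=None, activity_id=None, since=None, limit=50):
    """Build a GET URL for the LRS statements resource."""
    params = {"limit": limit}
    if verb_id:
        params["verb"] = verb_id          # filter to one verb, e.g. "answered"
    if activity_id:
        params["activity"] = activity_id  # filter to one exhibit/question
    if since:
        params["since"] = since           # ISO 8601 timestamp for incremental polling
    return f"{LRS_BASE}/statements?{urlencode(params)}"


# Every xAPI request must carry the version header (plus auth, omitted here).
headers = {"X-Experience-API-Version": "1.0.3"}

url = statements_url(
    verb_id="http://adlnet.gov/expapi/verbs/answered",
    since="2014-10-29T00:00:00Z",
)
print(url)
```

Polling with `since` set to the timestamp of the last statement seen is one simple way to keep a “nearly real-time” activity stream fresh without refetching everything.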

We used JavaScript widgets to create simple graphs to quickly and concisely display activity by exhibit and by student, but there’s still more data available that we could use to create whatever meaningful visualizations we might require in the future (Figure 4).


Figure 4:
Simple graphs quickly and concisely display activity by exhibit and by student

As I mentioned earlier, the push for teachers to connect all their instructional decisions to state and national standards is becoming more prevalent. To assist teachers and school districts in aligning field trip experiences with these standards, Phase 2 xAPI reporting will include the standard(s) that the exhibit and corresponding tablet interaction address. In this way, teachers not only have quantitative data about how students interacted with exhibits but also about how those interactions helped each student move toward standards mastery.

The ah-ha!

We learned a number of things on this project, and we continue to learn more with each new iteration. Some of the key findings so far include:

RFID was somewhat problematic with highly mobile learners. Beacons will likely work better, although they will also raise the cost of each name badge considerably.

When we think about tracking interactions, it’s very easy to get caught in question-and-answer, SCORM-like quiz questions. The first four interactions we built all fit this model. The later interactions started to push the boundaries a bit more, collecting data, free-form text, multi-part constructions, and so on. In future phases, the xAPI will allow the recording of objective data from the exhibits themselves.

We discovered one of the more interesting findings the night of the event at which we released RFED. Students were completely unimpressed: of course their technology knows them by name! And it’s nothing new to them to have tablet-based learning interactions. The biggest fans of RFED were teachers, who were very excited about the ability to get real data in real time about their students. The fact that the dashboard offered multiple easy-to-use options for using the data was even better.

In a SCORM world, a user must generally be logged into a system to record their individual progress and interaction. But the xAPI allows us to record the activity of anonymous users in any location, a choice that could be made due to privacy concerns or just to make it easier to gather anonymous data without a complex system of authentication that can slow down user interaction (especially for kids running around a science museum).

For RFED, we fed the predetermined scientist names (triggered by their RFID badges) into the xAPI statements in place of a user’s real name, and the LRS just records the statement—at its most basic, it doesn’t care who people are. Then those who have access can do what they want with the data—display, report, analyze patterns, course-correct, justify decisions, etc.
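The substitution itself can be as simple as a lookup table keyed on the badge, something like the sketch below. The badge IDs, names, and `homePage` are invented for illustration; the real badge-to-student mapping never leaves the teacher.

```python
# Hypothetical badge-to-pseudonym table. Only the scientist name ever
# appears in a statement; the real student roster stays with the teacher.
BADGE_TO_SCIENTIST = {
    "badge-001": "Ada Lovelace",
    "badge-002": "George Washington Carver",
    "badge-003": "Marie Curie",
}


def actor_for_badge(badge_id):
    """Return an xAPI actor carrying the pseudonym, never the real name."""
    name = BADGE_TO_SCIENTIST[badge_id]
    # Keying the account on the badge ID keeps each actor unique even if
    # two field trip groups happen to reuse the same scientist name.
    return {
        "name": name,
        "account": {"homePage": "https://example.org/rfed", "name": badge_id},
    }


print(actor_for_badge("badge-001"))
```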

We’re looking forward to learning more from this project as we continue with the next iterations. Phase 2 will include beacons, a new front-end login, 20 exhibits, three grade levels, and enhanced reporting for the museum, teachers, and students. Stay tuned!
