Use A/B Testing in eLearning to Add Choice, Show Value

Marketers and journalists employ A/B testing to measure the relative effectiveness of content, headlines, images, and more. Lynne McNamee, president of Lone Armadillo Marketing Agency and a speaker at The eLearning Guild’s recent Learning Personalization Summit, suggests using A/B testing in eLearning to measure and improve effectiveness.
A/B testing refers to trying out two variations of a design, content item, title, or other element and measuring the response. In marketing, the desired response might be click-throughs or completed purchases; for journalists, the goal might be shares, click-throughs, or longer time spent on the page. What’s in it for eLearning designers and developers?
Information. Using A/B testing is an opportunity to gather data. “One of the big ones I can suggest is the time it takes people to get the information that’s being conveyed,” McNamee said.
It’s also a way to increase choices for learners, which can make eLearning more engaging and more effective.
Test only one variable at a time
For an A/B test to guide design, the change must be limited to a single element of the design or content. An A/B test could be performed to compare a huge range of variables, such as:
- Response to different color schemes or typefaces and sizes
- Ease of use of two navigational paradigms
- Response to different storylines
- Comprehension of text-based content vs. an infographic
- Response to text presentation vs. video
- Comprehension of video vs. video with transcript and/or closed captioning
- Engagement with text headed with two different titles
- Application of knowledge after playing a game to learn a skill vs. reading instructions
- Performance after using a simulation vs. completing an asynchronous eLearning module and answering questions at the end
Multivariate testing can be done, McNamee said, but it doesn’t tell you which variable is causing the difference in response or results, so it’s not all that helpful in guiding design. If you want to test three options (say, the effectiveness of using text, video, and an infographic to present the same content), she advises doing two separate A/B tests. In each one, she said, the text group serves as the control. You’d test the text vs. the infographic, then test (with different learners) the text vs. the video.
She advises starting with a simple test: “Have two different versions: one that has the same information presented as text and one that has, say, an infographic. Not in the assessment; in the actual course delivery. And then see how long it takes someone to go through that section.”
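Once timing data for the two versions has been exported, the comparison itself can be very small. The sketch below is a hypothetical illustration in Python (the sample durations and variable names are invented, not taken from any real course): it compares average time-on-section for the two variants and runs a basic significance check.

```python
# Minimal sketch: compare time-on-section for two variants of the same content.
# The durations and variable names below are invented for illustration; in
# practice they would come from your LMS/LRS export, one value per learner.
from statistics import mean

from scipy import stats  # SciPy's independent-samples t-test

seconds_text = [412, 388, 451, 520, 397, 430, 465, 402]         # text version
seconds_infographic = [315, 290, 342, 360, 301, 333, 298, 355]  # infographic version

result = stats.ttest_ind(seconds_text, seconds_infographic)

print(f"Text mean:        {mean(seconds_text):.0f} s")
print(f"Infographic mean: {mean(seconds_infographic):.0f} s")
# A small p-value suggests the difference is unlikely to be noise.
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```

Time alone is only one signal, though; as McNamee notes below, it has to be read alongside assessment results and the business measures the training is meant to move.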
Offering choices to learners
Focusing on A/B testing to identify the stronger option and use only that (a common use of A/B testing in marketing or journalism) could be a mistake in eLearning. McNamee emphasizes increasing learner choice. “Have the employee self-select,” she said. “If you have the choice between having some information as text or having an infographic, which would you prefer?”
McNamee continued, “You look at the general educational system and teachers are trying different approaches. That sort of variation and individual preference has been reinforced [with K–12 and college students] over 12 to 16 years, so to then abandon it and ‘everyone has to learn things the same way’ just seems counterintuitive to me.” She points out that younger employees, people now in their 20s and 30s, have been learning on tablets and personalizing their digital environments practically “since birth.”
On the other hand, people might self-select a modality that is not ideal for them. McNamee offers a solution to that conundrum: “They choose, and for that section of the module you show that version the first time through. If they don’t pass that piece of the assessment, when you redeliver the course, you actually show the alternate version,” she said. This is more effective than repeatedly showing the same content, the same way, when the learner is clearly not mastering it. Having options therefore benefits both the learners, who have more control over their learning, and the organization, which can immediately present an alternative to a learner who is struggling. This echoes ideas inherent in universal and user-centered design approaches, such as “plus-one thinking.”
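The redelivery rule McNamee describes maps to a simple branch in course logic. The sketch below is a hypothetical illustration (the function and field names are invented, not tied to any particular LMS): the learner’s chosen modality is delivered on the first pass, and the alternate modality is substituted when the section is redelivered after a failed assessment.

```python
# Hypothetical sketch of the "show the alternate version on redelivery" rule.
# Function and field names are invented for illustration, not taken from any LMS.

ALTERNATE = {"text": "infographic", "infographic": "text"}

def pick_modality(preferred: str, attempt: int, passed_last_attempt: bool) -> str:
    """Return which version of a section to deliver to the learner.

    First time through, honor the learner's own choice. If they failed the
    assessment for this section, switch to the alternate version on the retake
    instead of repeating the same content the same way.
    """
    if attempt == 1 or passed_last_attempt:
        return preferred
    return ALTERNATE.get(preferred, preferred)

# A learner who chose text, failed the section assessment, and is retaking it:
print(pick_modality("text", attempt=2, passed_last_attempt=False))  # -> infographic
```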
A/B testing requires clear goals
Whether an L&D team uses A/B testing to narrow down the options they will offer to learners or to expand the options available, the testing is useful for gathering data. “All of this is capturing data and then using the data to deliver a better experience and a more personalized experience,” McNamee said.
But it’s essential to know what the longer-term goal is. “What are we trying to improve? What are we trying to learn? It almost seems like it’s a blank slate. So many organizations haven’t been using an LRS; they haven’t been capturing much beyond the registrations and the completions,” she said. Most LMSs do not capture data about which content learners engage with and for how long, but an LRS-based, xAPI-compatible system can capture that, and more.
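For teams that do have an LRS, the engagement data McNamee mentions is typically captured as xAPI statements. A minimal sketch of one such statement appears below; the learner, activity IDs, and duration are made up, and the call that actually posts the statement to an LRS is omitted.

```python
# Sketch of an xAPI statement recording which variant of a section a learner saw
# and how long they spent on it. The learner, activity IDs, and duration are
# made up; posting the statement to an LRS is omitted here.
statement = {
    "actor": {"name": "Example Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/courses/widgets-101/section-3/infographic",
        "definition": {"name": {"en-US": "Section 3 (infographic variant)"}},
    },
    "result": {"duration": "PT5M12S"},  # ISO 8601 duration: 5 minutes 12 seconds
}
```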
Knowing how learners engage with content is a starting point, not an end goal, though. Tying learning goals to business goals is a theme for McNamee. She pushes back against the idea that encouraging learners to spend more time with an eLearning module is, itself, a desirable goal, emphasizing instead the business goal underlying much corporate training: improved productivity. “The time people spend learning is time they’re not doing what the boss hired them to do,” she said. And “increasing learner engagement” is a nebulous target. “What’s engaging and motivating for one person is not the same as for someone else,” McNamee said.
Rather than trying to get people to engage with learning, L&D managers might ask: “Can they pass the assessment the first time and in less time?” If so, then that’s an indication of effective eLearning. “These are really smart, passionate professionals. They know what they want to do. If we start with that level 5 of Kirkpatrick, of those business measures, that’s what I think one of the biggest hurdles is.” (The Kirkpatrick Model considers four levels of evaluation for training: participant reaction, learning, behavior, and results; Dr. Jack Phillips added a fifth level, measuring ROI.)
Earn a ‘seat at the table’
Gathering A/B testing data helped marketing professionals show their value. “Once we started to get the data, that’s how we started to get more resources,” McNamee said. “We kept seeing actual results. We were seeing the effectiveness. We were seeing that we could close a deal much more closely, we could get repeat behavior, we could develop loyalty, we could spread the word. We were getting people to achieve what we were after them achieving.”
McNamee emphasizes that marketing professionals know what L&D can learn from marketing because they’ve been in the same position. “Everyone keeps talking about having a seat at the table. Well, this is how marketers really did it,” she said. With data.
“The learning department should be integrated throughout the organization, I think, and have that valuable voice,” she added. “They’re bringing a real value and service to the organization, but their objectives need to be aligned with the business objectives.”
Start small
A small L&D team might quail at the idea of creating multiple eLearning products with the same content, but it’s possible to start small, gather some data, and make the case for additional resources. “You’re changing the mindset of your organization; you’re showing this proof to the management that this is how they are going to see those results,” McNamee said.
The biggest mistakes people make are trying to do too much too fast and not having a way to measure what they are testing, McNamee said. Before going all-in on a big initiative, test; find someone in the organization with a specific problem, and offer to help solve it. To apply A/B testing in eLearning, start with a simple A/B test, presenting the same information using two different modalities. Figure out which one works best, then use that as the control to test another option, she said.
“Know what is the business problem that you’re trying to solve, not what learning problem you’re trying to solve,” she said. “Let the business metrics clearly define the path, and make sure you have some way to measure it.”






