
QA Test Strategies for Mobile Learning

Pity today’s mLearning developers. They already wear numerous hats just to do the essentials of their job. More hats than Don Draper. More hats than Pharrell Williams. More hats than TV shows on Netflix. Well, OK, that last one may be an exaggeration.
But the truth is that mLearning developers need to wear a lot of hats to do their jobs, and now they must wear a new hat: a QA engineer hat.
What is QA?
In some circles—particularly in the biopharmaceutical industry—quality assurance (QA) is defined as a verification of processes and quality control (QC) as a verification of products, which would make QA an inappropriate term for testing and verification of mLearning products.
However, in the software-development industry, QA engineers verify software products, and the software development world tracks very closely to the tasks and responsibilities of today’s mLearning developer. To ensure that their “product” works for their end users, mLearning developers must QA test and validate their mLearning software applications.
While training professionals have always done “quality checks” of training courses through content reviews, dry runs, beta classes, and so on, the migration to eLearning and mLearning has increasingly changed this process into a software quality check. And the role that handles that task is now equivalent to that of a software QA engineer.
Why is this important?
The advent of eLearning pushed us into relatively simple QA testing because we needed to test for things such as multi-browser compatibility and proper behavior on a specific LMS. However, the landscape for today’s mLearning developer is much more complex, and testing needs to be much more thorough. Learners now have a plethora of hardware devices running many different operating systems and potential environments for mLearning applications, making it increasingly important to have a more formal QA testing and validation system.
How can an mLearning developer possibly hope to test and validate with all of the potential variables that will be used by their audience? The answer lies in an organized QA testing strategy, and that begins with an effective working knowledge of QA testing concepts.
The quality assurance framework
Let’s start by examining categories of QA testing that will help to organize our test strategy. There are three commonly accepted categories that are focus areas for testing.
- Functional testing: Tests that verify all technical functionality on all certified devices and platforms.
- Non-functional testing: Tests for non-functional areas such as performance, security, and the user interface.
- Acceptance testing: Tests and validation by subject-matter experts (SMEs) to determine whether the lesson meets content requirements for the target audience.
Functional testing
Of the three categories of testing, functional testing is definitely the most crucial to ensure success for an mLearning lesson. The “old school” approach to functional testing was for the course developer to run through the course and ensure it behaved properly on their own workstation. This is clearly insufficient for mLearning development, and you must include additional steps.
First, developers can no longer handle this task on their own. It may be reasonable to expect developers to test on one desktop environment, one tablet device, and one smartphone. However, if you need to validate on more devices than that, the best strategy is to establish a QA testing team, and you should establish that team before the development process begins, not just before the testing process begins.
In addition, a formal, written QA test plan should be moved up from a nice-to-have to a requirement. There are many excellent templates for QA test plans on the web, but if you want to avoid the formalities, you should at least include:
- What features you are going to test
- What platforms and devices you are going to validate
For example:
- All clicks
- All audio
- All animations and transitions
- All hyperlinks, including all branching pathways
- All triggers, as applicable
- All variables, as applicable
- On current versions of Google Chrome, Internet Explorer, Mozilla Firefox, and Apple Safari
- On iPad 4 and Samsung Galaxy Tab 10.5
- On iPhone 6, Samsung Galaxy 5, and Windows Phone 8 or 8.1
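A simple way to keep a features-by-devices checklist honest is to generate the full test matrix rather than track it ad hoc. Here is a minimal sketch in Python; the feature and device names are placeholders loosely drawn from the example above, not a prescribed set:

```python
from itertools import product

# Illustrative feature and device lists -- substitute the items from your
# own test plan (these names are placeholders).
features = ["clicks", "audio", "animations", "hyperlinks", "triggers", "variables"]
devices = ["Chrome (desktop)", "iPad 4", "Galaxy Tab 10.5", "iPhone 6", "Windows Phone 8.1"]

# Every (feature, device) pair becomes one row of the QA checklist, so
# nothing gets validated on one device but silently skipped on another.
matrix = [{"feature": f, "device": d, "status": "untested"}
          for f, d in product(features, devices)]

print(len(matrix), "checks to run")  # 6 features x 5 devices = 30 checks
```

Pairing every feature with every device makes missing combinations visible immediately, and the row count gives your QA team an honest estimate of the workload.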
You should also be aware of several other types of functional tests that you may need, depending on your content and audience. (And even if you never use these tests, it will help you as a professional to be familiar with what they are.)
- Load testing: Performance testing that identifies areas in the application that may cause unacceptable wait times for learners. Sophisticated tools are available for this testing, but an online stopwatch can often do the trick.
- Gorilla testing: Testing things that only a gorilla would do. For example, click the browser’s back and forward buttons to see what happens. Often requires a little creativity—but this can be a very fun exercise!
- Regression testing: A repeatable test that you run after each new publishing. It is a good idea to do this at regular intervals during the development process.
- Smoke testing: A high-level test to ensure nothing catches fire. OK, seriously, smoke testing is a set of tests that ensure that the most important functions work. If all of them work, the product is stable enough to proceed with further testing.
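The smoke-test and stopwatch-style load-check ideas above can be combined into one tiny harness: run each critical check, fail it on any error, and fail it if it exceeds a wait-time budget. This is only a hedged sketch of the concept; the check names and callables are hypothetical stand-ins for real actions such as loading the opening page or playing an audio clip:

```python
import time

def smoke_check(name, check, max_seconds=3.0):
    """Run one smoke test: the check must succeed and finish within max_seconds."""
    start = time.perf_counter()
    try:
        ok = bool(check())
    except Exception:
        ok = False  # a crash is an automatic failure
    elapsed = time.perf_counter() - start
    passed = ok and elapsed <= max_seconds
    print(f"{name}: {'PASS' if passed else 'FAIL'} ({elapsed:.2f}s)")
    return passed

# Hypothetical checks -- in a real lesson these callables might load the
# first page, submit a quiz answer, or follow a branching link.
results = [
    smoke_check("lesson loads", lambda: True),
    smoke_check("quiz submits", lambda: True),
]
# If every smoke check passes, the build is stable enough for deeper testing.
```

Because each check is just a callable, the same harness doubles as a crude load test: tighten `max_seconds` to flag steps that keep learners waiting.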
Non-functional testing
For mLearning content, the most important non-functional test is readability. This is especially true on smartphone devices, where your goal should be to avoid the need for learners to use zoom-in gestures to view at least the core content. Optionally, you can add other non-functional tests such as user access, user security, conformance to marketing standards and product branding, etc.
But no matter what non-functional test you choose to include, make sure it gets added to the test plan!
Acceptance testing
As with any type of training or learning content, it will help your SMEs if you define their responsibilities for reviewing and accepting your mLearning application. Unlike other types of training or learning content, it can be helpful to use a little creativity with the scope of the SME review.
For example, if you select your SMEs with consideration to the devices that they own and use, you can request or require that your SMEs test the content on more than one device, which could validate your application on additional devices.
Of course, it is important to note that the scope of an SME review is often much different than the scope you would use for functional testing validation. The typical scope for an SME review may include:
- All written content
- All narrated content
- All assessments
- All/any omissions
If you are going to validate your mLearning for a device based on an SME review, you should add at least a few key functional tests to their acceptance test requirements. And as with functional and non-functional tests, make sure to include the SME’s review tasks in your QA test plan.
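If you do fold device validation into the SME review, a merged checklist keeps the content items and the extra functional spot checks in one place. A minimal sketch, where the item names and the device are illustrative placeholders rather than a recommended scope:

```python
# Content review items drawn from the SME scope above, plus a few key
# functional spot checks for the device this SME owns (names illustrative).
content_items = ["written content", "narrated content", "assessments", "omissions"]
functional_spot_checks = ["clicks respond", "audio plays", "branching links work"]

def sme_checklist(device):
    """Build one SME's review checklist, including device-specific checks."""
    items = [f"Review: {item}" for item in content_items]
    items += [f"Verify on {device}: {check}" for check in functional_spot_checks]
    return items

for line in sme_checklist("iPad 4"):
    print("-", line)
```

Generating one list per SME-and-device pairing makes it easy to paste the combined review tasks straight into the QA test plan.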
Applying the QA testing framework
In addition to the aforementioned strategies, there are two other key areas that can make your QA testing tasks much easier.
First, the development tool that you select can dramatically impact the interface issues that you have to adjust to ensure that your content works properly on each device. Just about all development tools now have a checkbox for HTML5, but the capabilities are very different from tool to tool.
Before you start your project, test your tool on the devices that your learners will use. DO NOT simply select or use an untested tool just because it claims to output HTML5!
Next, the development process that you use can also have a big impact on your QA testing burden. For example, if you use the ADDIE development model with storyboarding reviews, you may have a mountain of surprises at the end of the development process when you finally put the lesson into mLearning mode. As a result, you may wind up doing significant re-design at a point where you should be focusing on adding final content to the lesson.
On the other hand, a rapid prototyping development process such as Agile or SAM can help to identify design issues up front and can even foster creativity with other features that you may not have envisioned in the content outline or design spec phase. In addition, reviewers will be able to offer more meaningful suggestions for making your application a success.
Saved by a disclaimer?
No matter how many devices you test and validate on, it is almost impossible to cover all possible device combinations. If nothing else, think about how many of your validated devices will be upgraded within 12 months of your mLearning go-live. (Answer = most?)
Whenever possible, place a disclaimer statement at the beginning of your mLearning content. For example:
“This content is designed for desktop and mobile devices. It is best viewed on Chrome, Internet Explorer, or Firefox browsers; iPad or Galaxy Tab tablets; and iPhone, Galaxy, or Windows smartphones.”
Even if a learner does not have one of those devices, they will appreciate the warning and will be more willing to accept minor issues as they navigate through the content.
Your new hat
Congratulations! You’re now ready to wear your new hat.
The truth is that you have probably had this hat for quite a while but never thought about it as a QA hat. Perhaps it was simply a check-your-work hat or a be-meticulous hat. But now that you have a QA hat, it should help you in several areas:
- You will be able to organize your lesson testing more effectively
- You will have better assurance that you are testing your lesson in all key areas
- You will have a new vocabulary that will help to convince people that you know your stuff
- You may (with a little luck!) be able to gain resources to help you with the testing process






