The growing availability of voice assistants and sophistication of voice computing are drastically changing how people search for and find information, with clear implications for eLearning design.
Voice-directed microlearning or performance support is, no doubt, coming. Soon. But that barely scrapes the surface of how voice assistants and voice-based searches will affect the way eLearning designers create and organize information.
Voice search & one-shot answers
Since its early days, a Google search has returned a selection of resources—a list of answers, potential answers, and websites that address the search topic. This is the heart of keyword-based search.
Searchers typing a query tend to use terse phrases rather than full sentences or grammatically correct questions. When vocally addressing a voice assistant, though, people tend toward the conversational.
The way a voice-based search engine comes up with answers is different, too.
In Talk to Me, author James Vlahos writes extensively on the differences—and describes the progress of the natural language processing and generation technologies fueling the rapid evolution of voice assistants into conversation partners.
One key difference is that a voice search tends to return a single answer, rather than a list of options to explore.
A single response works well with voice computing
This makes sense. Most users asking a voice assistant cannot, or do not want to, look at a screen or follow multiple links.
They don’t want the voice assistant to list off a bunch of websites; they want an answer to their question. Thus voice search is driving an evolution to “one-shot” answers, a single response to a query.
In many cases, this is fine. Questions people ask voice assistants tend toward the functional or immediate: What are the hours of …? How do I get to …? might be asked as the searcher is driving toward a business. What’s the weather in …? What’s the best place for sushi in …? are common queries when planning a trip. These are questions with simple answers that Alexa or Google Assistant can find and deliver quickly.
Broader implications of one-shot answers
On-screen search results are also trending toward a similar one-shot format. Google’s featured snippets provide what, according to Google’s algorithms, is the best answer. This answer is highlighted at the top of the screen, with a link to the source.
Another way Google commonly presents a one-shot answer is in a box on the right side of the screen, populated with information taken from its Knowledge Graph, a knowledge base that Google searches use to connect facts and enhance search results.
If you’ve googled a person’s name or a topic covered in Wikipedia, the box often pulls information from, and links to, that page.
These responses might be handy, and, for many simple questions, they do present a single correct answer. But they have serious drawbacks.
- Many questions do not have a single correct answer, or that answer changes constantly. Searchers should have the opportunity to see and choose from multiple equally appropriate options.
- More seriously, the algorithms do not filter or fact-check their sources, sometimes leading to wildly incorrect or inappropriate responses. As pointed out in numerous “when Google gets it wrong” articles, some of these responses are, to understate dramatically, alarming.
- The answer is there, in a much-abbreviated form. While this might be easy for searchers, it reduces the depth of the answer—and reduces traffic to websites that produce the content that answers the searcher’s question. Since traffic is the lifeblood of many content sites, this could have the ultimate result of reducing the number of websites and content sources that thrive and provide information.
- Even without harming the viability of websites, providing a single answer puts Google—or, rather, its algorithm—in the frighteningly powerful position of determining which information people access and see.
Limiting the number of information sources people see
Let’s examine that last point more closely.
The rise of microlearning and workflow learning is often cited as an illustration of how people learn these days. They want quick information, answers to a narrowly focused question, a solution to an immediate problem. When they get that answer from a trusted source, they accept it, apply it, and get on with their day.
This at-work behavior mimics and derives from consumer behavior. In short, people are not applying their critical thinking skills to analyze the one-shot answers or seek confirmation; they assume that Google has done that already.
In the broad context of civic life, allowing a single corporation to essentially determine what information people are exposed to sounds dystopian and dreadful. But, in a narrower context, that flaw turns into a surprisingly useful feature.
Turning a fatal flaw into a key benefit
The most frightening aspect of one-shot answers in a general search might actually be a key strength when considering the model in an eLearning context.
That’s because, in eLearning, instructors often want learners to learn, remember, and apply a specific answer or set of answers.
As opposed to a general search where there may be many right answers or myriad viewpoints that are worth considering—or where agenda-driven and incorrect answers somehow percolate to the top of Google’s list—many in-the-workflow queries and training questions do have a single correct response.
A tool that would quickly surface that response in reply to a natural-feeling spoken question would be immensely helpful in many eLearning and workflow learning contexts.
Voice computing & the future of eLearning
In an earlier article, “In Search of a Conversational Bot: Voice Computing Evolves,” I describe the evolution of natural language technologies that make conversations with voice assistants, chatbots, and the like feel more natural.
Whether a system is rule-based or relies on ever-improving machine learning of language and facts, it’s far easier to anticipate and thoroughly prepare for conversations on a limited set of related topics than for the anything-goes world of consumer search. And, one hopes, a chatbot or virtual assistant in a closed professional environment wouldn’t face the hordes of bored consumers who delighted in corrupting early attempts at conversational AI, like SmarterChild and Microsoft’s Tay.
In developing an eLearning conversational assistant, a developer could feasibly “teach” the voice computing machine learning algorithm the essential information it needed to answer typical questions and explain the necessary procedures or tasks within that defined set of topics.
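To make this concrete, here is a minimal sketch of what the rule-based end of that spectrum might look like: a closed-domain assistant that maps spoken-style questions to a single approved answer. The intents, trigger patterns, and answers below are invented for illustration; a real system would draw them from the organization’s own training content.

```python
# Minimal sketch of a rule-based, closed-domain learning assistant.
# All intents, patterns, and answers are hypothetical examples.

import re

# Each intent pairs trigger patterns with the single approved answer --
# the "one-shot" response the instructional designer wants surfaced.
INTENTS = {
    "reset_password": {
        "patterns": [r"\breset\b.*\bpassword\b", r"\bforgot\b.*\bpassword\b"],
        "answer": "Open the account portal, choose 'Forgot password', "
                  "and follow the emailed link.",
    },
    "submit_expense": {
        "patterns": [r"\bexpense report\b", r"\bsubmit\b.*\bexpenses?\b"],
        "answer": "Log in to the finance tool, attach your receipts, "
                  "and submit before month end.",
    },
}

FALLBACK = "I don't have an answer for that yet. Try rephrasing the question."

def answer(query: str) -> str:
    """Return the single approved answer for a conversational query."""
    q = query.lower()
    for intent in INTENTS.values():
        if any(re.search(pattern, q) for pattern in intent["patterns"]):
            return intent["answer"]
    return FALLBACK
```

Because the topic set is defined in advance, a query like “How do I reset my password?” can only ever surface the vetted answer, and anything outside the domain falls through to an explicit fallback rather than a guess. A machine-learning version would replace the regular expressions with trained intent classification, but the closed-domain principle stays the same.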
While voice computing developers may be far from releasing a usable conversational assistant, it’s not hard to imagine a long list of ways to use this helpful technology in eLearning, and especially to support workers and learners as they apply learning and do their jobs. Someday your favorite colleague and workflow learning aid could be a conversational AI bot!