What if learners executed 40,000 searches for content in your LMS every day? Every hour? How about every minute? It is estimated that Google processes over 40,000 search queries every second worldwide. That’s 3.5 billion searches per day.
Why does this matter for the eLearning community? Employees—learners—are contributing to those billions of searches. They are used to typing a question into the search bar and getting a selection of responses that provide what they are looking for. “Ask and ye shall receive.”
Reaching the level of trust and expectation that people have about how searches should work took a lot of conditioning over the years. In 2004, Google started testing a “suggest” feature in its search bar; it rolled out “Autocomplete” in 2008. In 2010, “Google Instant Search” was launched, which Google itself referred to as a “fundamental shift in search.”
As explained by Search Engine Land, “Google Instant is a feature that predicts what you’re searching for and shows results as you type. It uses Google’s autocomplete technology to show predicted search terms in a drop-down box, and begins to display search results below the drop-down. As you continue to type, both the predicted queries and the search results change.”
Though this functionality was dropped in 2017, it had a fundamental influence on how billions of people were “trained” to search for information online. This has significant ramifications for L&D: Google has conditioned the way learners think about posing questions. And, regularly reinforced experience with Google searches has shaped how learners expect to get answers.
Asking questions
Fundamentally, Google is a place we go, not for answers, but rather to ask questions. It is the Zoltar machine in the amusement park to which we pose our challenge in hopes of getting a quick, reliable, and actionable answer. The results Google delivers—those myriad websites and images and videos—are the answers. Google helps us get to the right results, selected from the estimated 4.46 billion pages it had indexed or cataloged as of September 3, 2018.
Doing this rapidly, though, is not easy. Google search streamlines the process by limiting the ways people ask questions, prompting options that are worded consistently. By encouraging users to choose from supplied options, Google reinforces the wording of those options rather than the thousands of other ways the questions could be phrased; it also "trains" searchers to formulate questions in a way that Google can process quickly.
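As a thought experiment, the sketch below shows how an autocomplete-style suggester can narrow phrasing by matching what the user has typed so far against a small set of canonical queries. The query list and function names are hypothetical and purely illustrative; Google's actual system is vastly more sophisticated.

```typescript
// Minimal sketch of autocomplete-style suggestion over a hypothetical
// list of canonical query phrasings. Data and names are illustrative,
// not Google's actual implementation.
const canonicalQueries: string[] = [
  "how to submit an expense report",
  "how to reset my password",
  "how to request time off",
];

// Return up to `limit` canned phrasings that start with what the user
// has typed so far, steering them toward consistent wording.
function suggest(typedSoFar: string, limit = 5): string[] {
  const prefix = typedSoFar.trim().toLowerCase();
  if (prefix.length === 0) return [];
  return canonicalQueries
    .filter((q) => q.startsWith(prefix))
    .slice(0, limit);
}

console.log(suggest("how to re"));
// ["how to reset my password", "how to request time off"]
```

Because the learner picks from the suggested phrasings far more often than typing a novel one, the supplied wording gets reinforced with every search.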
Providing answers
Content providers creating eLearning materials, images, videos, and other assets have had to adjust over the years to prepare content that surfaces appropriately in search results. This affects what appears on the page, as well as the meta descriptions, titles, and headings, which must align with how people search for content.
It makes sense: if someone is searching for an answer to a problem, they are likely to choose an option that matches their search. Even if 10 other options are better produced, better presented, or better researched, content that doesn't match how the person searched will sit in oblivion.
Thus, people have been taught not only how to ask a question, but also to expect the answers provided to look a certain way. Ubiquitous, regularly reinforced use of Google search has conditioned people to think about how to phrase a question and what constitutes an answer.
As online search has become the first line of attack for solving many problems, most learners have mastered Google search skills. Bringing this familiar digital consumer behavior into L&D can increase learner buy-in and engagement.
LMS search should embrace Google’s approach
Most LMS platforms either have search functionality or are introducing it. Therefore, it is important that LMS administrators understand how Google, by far the largest search engine, works. While some L&D professionals might hope that search within the LMS could operate differently, the effort required to overcome years of searching behavior, reinforced multiple times a day, would be enormous and wasted.
Rather, embrace it.
This means adjusting the meta tagging, titles, and descriptions of content at a minimum, and likely some of the content presentation itself, to ensure that the best content for a learner’s query shows up, ideally at the top of the results.
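As a rough illustration, the before-and-after sketch below shows what that adjustment could look like for a single piece of LMS content. The field names and values are hypothetical and not tied to any particular LMS or metadata standard.

```typescript
// Hypothetical metadata record for an LMS content item; fields are
// illustrative, not drawn from any specific LMS or specification.
interface ContentMetadata {
  title: string;        // worded the way learners actually ask
  description: string;  // the "answer preview" shown in search results
  tags: string[];       // phrasings learners are likely to type
}

// Written from the provider's point of view: describes the course.
const before: ContentMetadata = {
  title: "Module 7: Expense Policy Compliance Training",
  description: "Mandatory annual training on corporate expense policy.",
  tags: ["compliance", "finance", "policy"],
};

// Written from the searcher's point of view: answers the question.
const after: ContentMetadata = {
  title: "How to submit an expense report",
  description: "Step-by-step walkthrough of submitting and approving an expense report.",
  tags: ["submit expense report", "expense approval", "get reimbursed"],
};
```

The "after" version leads with the question a learner would actually type, so a query such as "submit expense report" can match the title and tags directly.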
This requires a foundational shift of viewpoint, one that marketers regularly face with sales departments: Sales wants to tell people about their product or solution; marketers want to listen and respond to what someone is asking. A similar dynamic frequently exists in eLearning.
If, rather than telling learners about the content, eLearning developers used YouTube titles and Google searches as models for how people ask and answer questions, they could improve both the content and the coding of content within the LMS, leading to a better LMS search experience for learners.
Google search leverages a number of factors that influence which results are displayed and in what order. These are important to keep in mind because, again, learners have been conditioned to expect targeted results, even if they don't realize which factors make a result "targeted."
- Location: Google shows what is relevant to where the searcher is located, to the level of state and city
- Previous searches: Google considers previous searches by the individual to deliver a personalized result
- Trends: Google factors in what other people are searching for
The Google algorithm is more complex than this, but people expect at least these criteria to influence how results are ranked. Modern digital learners expect content to be personalized or, at a minimum, relevant to their immediate job needs.
In addition, they expect results to appear in an order that is weighted toward relevancy to the question asked—by the individual learner who is searching, not all employees or a “typical” employee.
Finally, the description, title, and core tagging by which results are filtered should mirror how results on the web are shown, allowing learners to select the result that best answers the question.
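To make those expectations concrete, here is a minimal scoring sketch that ranks LMS results using the kinds of factors discussed above: relevance to the query, location, the learner's previous searches, and trends. The weights, interfaces, and field names are assumptions made for this example, not a description of Google's or any LMS vendor's actual ranking algorithm.

```typescript
// Illustrative scoring sketch for LMS search results. All weights and
// field names are assumptions for the example.
interface LearnerContext {
  location: string;          // e.g., the learner's office or region
  previousSearches: string[];
  trendingTerms: string[];   // what other learners are searching for
}

interface ContentItem {
  title: string;
  tags: string[];
  location?: string;         // region the content targets, if any
}

function score(item: ContentItem, query: string, ctx: LearnerContext): number {
  let s = 0;
  const q = query.toLowerCase();

  // Relevance to the question asked by this individual learner
  if (item.title.toLowerCase().includes(q)) s += 3;
  if (item.tags.some((t) => t.toLowerCase().includes(q))) s += 2;

  // Location: boost content targeted at the learner's region
  if (item.location && item.location === ctx.location) s += 1;

  // Previous searches: boost topics this learner has looked for before
  if (ctx.previousSearches.some((p) => item.tags.includes(p))) s += 1;

  // Trends: boost what other learners are currently searching for
  if (ctx.trendingTerms.some((t) => item.tags.includes(t))) s += 1;

  return s;
}

// Results would then be displayed in descending score order, e.g.:
// items.sort((a, b) => score(b, query, ctx) - score(a, query, ctx));
```

Even a simple weighting like this puts the result that matches the learner's own phrasing and context at the top, which is what years of Google searches have taught them to expect.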