New technologies are always appealing, but they run the risk that we'll end up missing the point and going astray. In particular, I want to challenge one approach that I fear is being misinterpreted and therefore potentially misapplied. In short, answering questions, or getting them answered, using a chatbot isn't the same thing as learning.

The underlying point is that our intelligence augmentation (IA) has to be carefully aligned with what we know about how we think. If we misapply a technology, we can undermine the very outcome we intend. As such, we need to know both the technology and learning. (That, by the way, is the underlying agenda of learning engineering!)

Chatbots

Chatbots emerged from artificial intelligence (AI) research into representing and using language. They use a variety of techniques to create linguistic agents that can answer your questions. Whether through voice or just text, these agents can provide useful interactions.

Depending on the nature of the need, different underlying technologies can be used. They can do sophisticated natural language processing on top of a deep semantic model, or simpler keyword matching. In a sufficiently narrow domain with predictable questions, the latter can be easy to develop and still serve quite well. And the ability of systems to parse text and generate semantic models is getting more powerful. Ultimately, we can build them.
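To make the keyword approach concrete, here's a minimal sketch in Python. Everything in it is a hypothetical illustration: the intents, trigger words, and canned responses are invented, and a real bot would add stemming, synonyms, and a confidence threshold before answering.

```python
import re

# Minimal keyword-matching chatbot sketch. Each intent is a bag of trigger
# words; the bot returns the response of the intent that shares the most
# words with the question. Intents and responses are hypothetical examples.
INTENTS = {
    "vacation_policy": {
        "keywords": {"vacation", "holiday", "pto", "leave", "days"},
        "response": "Full-time employees accrue 15 days of PTO per year.",
    },
    "expense_reports": {
        "keywords": {"expense", "reimbursement", "receipt", "receipts", "travel"},
        "response": "Submit expenses through the finance portal within 30 days.",
    },
}

FALLBACK = "Sorry, I don't know that one. Routing you to a person."

def answer(question: str) -> str:
    # Tokenize crudely; a real system would stem words and handle synonyms.
    words = set(re.findall(r"[a-z']+", question.lower()))
    best_response, best_score = FALLBACK, 0
    for intent in INTENTS.values():
        score = len(words & intent["keywords"])
        if score > best_score:
            best_response, best_score = intent["response"], score
    return best_response

if __name__ == "__main__":
    print(answer("How many vacation days do I get?"))
    print(answer("Where do I send my travel receipts?"))
```

In a domain this narrow, simple overlap counting is often enough; the hard design work is choosing the keywords and deciding when to fall back to a human.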

So if we can, should we? And if so, when? What needs can they serve? And what would be times when we shouldn’t use them?

Applied chat

Organizations are finding a wide variety of uses for chatbots. While chatbots started out in customer-facing roles, reducing customer service overhead, they've also been recognized as a way to improve internal efficiency. Obviously, they'll be used wherever money can be saved and efficiency improved.

The core reason for chatbots is to answer questions. And in narrow domains, simple approaches suffice. Automated phone menus, for instance, are just trying to determine whether to send you to sales or support; these are relatively easy to build. More complex areas, like medical systems, need to be more nuanced about just what is being asked, and for those situations, more powerful solutions are in order.

For many internal learning or performance questions, chatbots can be useful. For instance, policy documents can be parsed so that a chatbot answers the same question about vacation policy instead of a person taking time to answer it yet again. Chatbots might also answer questions about products in the moment, à la Siri, to ensure the right answer is provided anywhere in the company, including in training. Even in more complex domains, they could be built to answer questions about a particular technology in use, for instance.

One exciting new area is that these systems can now parse materials and come up with questions, not just answers! From performance support, we are now moving into real formal learning. (Not to say that they couldn't be a personal tool for informal learning as well.) That is, the systems can know something and ask the learner questions to see whether they've comprehended the material. And this is an important part of learning. But it's not sufficient: having the knowledge is an important component of learning, but it's not all of it.
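As a toy illustration of question generation (not any particular product's method), the sketch below blanks out one content word per sentence to create a recall check. The stopword list and length threshold are arbitrary assumptions; real systems build much richer semantic models of the material.

```python
import random

# Words too generic to be worth asking about (an arbitrary, illustrative list).
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "for", "within"}

def make_question(sentence: str) -> tuple[str, str]:
    """Turn a declarative sentence into a fill-in-the-blank question."""
    words = sentence.rstrip(".").split()
    candidates = [w for w in words if w.lower() not in STOPWORDS and len(w) > 3]
    if not candidates:
        raise ValueError("No content word to blank out.")
    target = random.choice(candidates)
    stem = " ".join("_____" if w == target else w for w in words)
    return stem + "?", target

def check(reply: str, target: str) -> bool:
    """Naive answer check; a real system would accept paraphrases."""
    return reply.strip().lower() == target.lower()

if __name__ == "__main__":
    random.seed(1)  # seed only so the demo is reproducible
    question, expected = make_question("Expense reports are due within thirty days of travel.")
    print("Q:", question)
    sample_reply = expected  # pretend the learner answered correctly
    print("Correct:", check(sample_reply, expected))
```

Even this trivial generator shows the limitation the next section takes up: the question comes straight from the text, so it tests recognition and recall, not application.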

Chats are, and aren’t, learning

Having the necessary knowledge is, we know, a useful precursor to learning. Moreover, writing good knowledge assessments by hand is difficult, let alone cost-effective. So the fact that, given coherent content (and you should have it), a system can automatically generate your knowledge assessments is a real benefit.

But that alone isn't going to lead to meaningful behavior change. Knowledge alone isn't the game-changer your organization needs. What's needed to successfully develop new skills is to apply that knowledge in context and get feedback. So you need the knowledge, but then you need to apply it. Meaningful practice is key.

Chatbots have many legitimate uses. They can replace an FAQ. They can support people by answering questions. And they can ask questions, too, to help you get the basic knowledge down. Or, if what you're learning is to answer questions, a chatbot that asks them is good practice.

However, if you’re learning to make complex decisions, you need to practice them in an appropriately challenging environment. That includes the context, with plausible alternative decisions that represent reliable misconceptions, deliberate complexity, etc. This is the role of the learning experience designer, and no AI technology is ready to do that yet. I argue that transformative experience design is a core skill for the future, riffing off of Pine & Gilmore’s The Experience Economy. (Even if AI can do it, I suggest we want to reserve that for ourselves.)

Too often, our organizational learning approach has been an information dump and a knowledge test. And chatbots can do that. However, that's not what will make a real difference in organizations. As we wake up to the necessity for more meaningful outcomes from our learning investment, we need to recognize the core importance of deliberate practice in contexts that matter to and interest learners. That's when knowledge suddenly becomes really meaningful. And then we have a role for chatbots!

From the editor

Clark Quinn will lead a full-day pre-conference workshop prior to DevLearn 2019 Conference & Expo. The workshop, “Learning Experience Design: Integrating Engagement and Learning Science,” will take place Tuesday, October 22.