In March 2022, in our article The Future of ID in an AI World, we explored how advances in artificial intelligence (AI) were promising great things for learning.

In light of the recent excitement around the release of ChatGPT, we want to explore how far that excitement around these advances in Natural Language Processing (NLP) is justified from a learning perspective, and what potential impact these advances could have on learning & development (L&D) and on instructional design (ID).

Previously, we explored two specific angles from which AI technology could advance learning: first, the ability to build programs of learning automatically, albeit from existing, high-quality content; second, the ability to deliver these programs to learners via individually personalized journeys.

This continues to present a huge opportunity for both learners and L&D teams, and more and more teams are exploring these capabilities. Automated mapping and contextualization remove the arduous task of building learning paths into courses by hand, while adaptive learning engines can guide the learner through this network of content like a learning GPS, saving time, improving efficiency, and providing hitherto unseen learning analytics.
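To make the “learning GPS” metaphor concrete, here is a minimal sketch in Python: content mapped as a prerequisite graph, with a breadth-first search plotting the shortest route to a target skill while skipping what the learner has already mastered. The topics and graph structure are invented for illustration; a production adaptive engine would, of course, be far more sophisticated.

# An illustrative sketch of the "learning GPS" idea: content mapped as a
# prerequisite graph, with a breadth-first search plotting the shortest
# route to a target skill. Topics and structure are assumptions.
from collections import deque

# Hypothetical content network: each topic lists the topics it unlocks.
unlocks = {
    "basics": ["formulas", "charts"],
    "formulas": ["pivot_tables"],
    "charts": ["dashboards"],
    "pivot_tables": ["dashboards"],
}

def plot_route(start: str, goal: str, mastered: set[str]) -> list[str]:
    """Breadth-first search for the shortest content path to the goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            # Skip steps the learner has already mastered.
            return [step for step in path if step not in mastered]
        for nxt in unlocks.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

print(plot_route("basics", "dashboards", mastered={"charts"}))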

We concluded, and our position persists today, that high-quality experience design is and remains the gold standard for learning. It rests on a rigorous basis: good data from experts, combined into cogent explanations, delivered in solutions that leverage media effectively.

Enter ChatGPT

Into the mix, OpenAI has now thrown ChatGPT, which has been receiving a lot of attention across both news and social media. People have been busy testing and playing around with the chatbot, asking it to write articles and reviews. L&D professionals have also shared their experiences with the system, asking it about the challenges L&D faces now and going forward, and even about its very own impact on L&D’s future, with varying degrees of “success.”

It seems that the engine is able to come up with some well-written answers and articles, but users immediately point out that answers can include references and statements that are outright wrong or that even contradict themselves within the same piece. An excellent case in point, for our sector at least, is the engine’s much-discussed references to learning styles when it is asked to output articles and answers around learning.

The fundamental issue

And here lies the crux of the matter. Before we look at the output, we need to look at the ChatGPT engine and what it actually does. To be able to interact at all, an AI engine first has to be trained. In the case of ChatGPT, the engine has been trained on vast amounts of digital text taken straight from the internet.

This text forms the foundation of what the engine “knows.” The engine draws on the statistical patterns it has learned from this body of training text to answer questions or to produce articles and summaries, and it can even produce these written in a certain “style,” copying approaches it has been fed as part of the training.

This latter capability has led to a lot of fun and humor around testing the engine, arguably most notably a user who prompted it to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.”
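For the curious, this is roughly what such a request looked like in code at the time, using the (since superseded) Completion endpoint of OpenAI’s openai Python library; the model name and parameters here are illustrative assumptions, not a prescription.

# A minimal sketch of sending a style prompt to a GPT-3-family model via
# OpenAI's legacy Completion endpoint. Model name and parameters are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model available at the time
    prompt=("Write a biblical verse in the style of the King James Bible "
            "explaining how to remove a peanut butter sandwich from a VCR."),
    max_tokens=256,
    temperature=0.7,  # higher values produce more varied output
)

print(response["choices"][0]["text"])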

While we might initially agree that this is fun or impressive, we have to ask ourselves: Is it actually useful? 

Chatbots mirror their ‘training’ data

One thing we need to be very clear on is that the engine is, in essence, a mirror of all the content it has been provided. The engine takes all the texts, information, and styles it has been given and assembles them into an output that responds to the user’s query or request. Imagine a set of mirrors, reflecting the texts and styles we initially put in back toward us.

While impressive, the issue here is that these mirrors can only reflect the input back at us—including the facts, but also including all the opinions and biases that we have provided it with. 

This is a known problem. AI engines require good initial content. Time and again, the use of legacy data has demonstrated how harmful biases in that data find their way into the AI algorithms built on it.

To explore this further, imagine we gave the chatbot only The Hitchhiker’s Guide to the Galaxy. All the chatbot would be able to do would be to reflect back to us the world described in this fictional work created by Douglas Adams—including the Vogons, a fictional alien race from the planet Vogsphere. That would be all it would “know.” To the chatbot, Vogons would be a thing. Fact. Not fiction.
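To make the mirror idea concrete, consider a deliberately tiny sketch: a toy bigram model “trained” on two sentences about Vogons. However it recombines its input, it can never produce a word or claim that was not in its training text. The corpus and code are our own invented illustration, not how ChatGPT is actually built.

# A toy illustration of the mirror idea: a bigram model trained on a tiny
# corpus can only ever echo back combinations of the text it was given.
import random
from collections import defaultdict

corpus = (
    "the Vogons are a race from the planet Vogsphere . "
    "the Vogons write terrible poetry ."
)

# "Training": record which word follows which in the input text.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": walk the recorded transitions from a seed word.
random.seed(42)
word, output = "the", ["the"]
for _ in range(12):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
# Whatever comes out can only contain words and sequences from the corpus
# above -- to this model, Vogons are simply "fact."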

Implications for AI-aided ‘learning’

The implications for any form of “learning” utilizing this technology are therefore straightforward: If we want to avoid coming across the Vogons in our learning journeys, we must ensure that the input to any AI engine we use is carefully created and curated. 

Such good initial content might be readily available for general topics, the things high-school essays are made of, but for subject-matter expertise and content specific to an organization or its products, services, and clients, we are in the same place that L&D professionals have always been: We require high-quality, human-grade content creation and curation.

Even if such creation, curation, and vetting were indeed possible using AI engines, what would this mean for learners and for designers, respectively?

Limitations of the chatbot

While the ChatGPT chatbot can provide answers to questions, it is limited to plain text. By comparison, intelligent search functionality based on AI contextualization engines is already capable of providing high-quality search results. These point the user directly to specific sections within existing content pieces and can include all forms of content: documents, slides with images and illustrations, audio, and video.
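As a rough illustration of what such search does under the hood, here is a minimal sketch that ranks content sections against a query and points the user to the best match. We use scikit-learn’s TF-IDF as a simple stand-in for a production contextualization engine, and the sections shown are invented examples.

# A minimal sketch of intelligent search: rank sections of existing
# content against a query and return the best-matching section.
# TF-IDF here is a stand-in for a production contextualization engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative content sections; in practice these would be indexed
# passages from documents, slides, or video transcripts.
sections = [
    "Onboarding: setting up your development environment step by step.",
    "Compliance training: data protection duties for client records.",
    "Product guide: configuring the reporting dashboard for managers.",
]

vectorizer = TfidfVectorizer()
section_vectors = vectorizer.fit_transform(sections)

query = "how do I set up the reporting dashboard?"
query_vector = vectorizer.transform([query])

# Rank sections by cosine similarity and return the closest match.
scores = cosine_similarity(query_vector, section_vectors)[0]
best = scores.argmax()
print(f"Best match (score {scores[best]:.2f}): {sections[best]}")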

What the chatbot does add on top of existing intelligent search functionality is its ability to summarize or revise content and present it using a “friendly” user interface. While impressive, this is not yet progress enough to get excited about when it comes to learning.

ChatGPT is a communication engine, but it needs a knowledge engine behind it. Getting that knowledge engine to be accurate is still a human endeavor.

Chatbot-driven learning isn’t here — yet

For learning and skills development, what the chatbot offers simply falls far short of L&D requirements. 

For a start, the cognitive theory of multimedia learning tells us that for effective learning to take place, we require subject matter expertise as well as effective, evidence-informed learning strategies. We know that such strategies should include dual-channel processing (presenting both words and images), and that is only the starting point.

While there are also interesting experiments in AI-generated images, it’s not clear yet whether AI can create diagrams to support learning, and to do so would still require accurate content.

We also know that learning needs to be put into practice, and that learners need to utilize reflection, spaced practice, repetition, and recall to truly learn and retain information and skills.

Looking to the future 

Regarding the potential impact on the future of L&D, and in particular the role of the ID, we argue that while high-quality and in-depth content can be processed and revised by the AI, it cannot be written by the AI. The original content suite has to be curated or created, which requires content and design expertise.

While the AI engine might be able to help us identify gaps in our content, we would still require human expertise, external to the system, to close those gaps. AI can generate questions as well as answers, but the output needs thoughtful, critical review to weed out the nonsensical questions and responses it produces.
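As a hypothetical sketch of that division of labor, the following has the model draft quiz questions and then routes every draft through explicit human approval before anything reaches learners. The API usage mirrors the legacy OpenAI Completion endpoint as above; the model name, prompt, and review loop are assumptions for illustration.

# A hypothetical workflow: the model drafts quiz questions, and a human
# expert approves each draft before anything is published.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

def draft_questions(topic: str, n: int = 3) -> list[str]:
    """Ask the model to draft quiz questions on a topic (unvetted output)."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write {n} short quiz questions about {topic}, one per line.",
        max_tokens=200,
    )
    text = response["choices"][0]["text"]
    return [line.strip() for line in text.splitlines() if line.strip()]

# Every AI draft goes into a review queue; only items a human expert
# approves are kept -- the weeding-out step described above.
review_queue = draft_questions("data protection basics")
approved = [q for q in review_queue if input(f"Approve? {q} [y/n] ") == "y"]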

To go further, we should not limit our considerations to pure subject matter expertise; we must also consider the deployment of effective multimedia learning design principles and the resulting learner experiences. While AI can remove some tedium, it still requires talented humans to create and curate the content, review the output, generate meaningful practice, and tie it all together into a learning experience. That is likely to remain true for quite a while, despite advances, and we think that’s to the good.

The best outcomes come from the right partnership of human and technology, whatever the technology is, including AI. So here’s to good partnerships.