Navigating the marketplace for learning technologies and finding the right tools to fit the specific needs and strategy of an organization has always been a challenge. Now, with the rapid deployment of generative AI in the form of Large Language Models (LLMs), this challenge has become more difficult.
So how should organizations approach the buying process, and what questions should they be asking vendors about deploying AI in their solutions? And, more importantly, how should they interpret the answers?
1. What kind of AI is being deployed, and how does it add value?
To understand the answer, we first need to be able to distinguish between the generative and non-generative AI engines at work.
- Non-generative AI refers to AI models that do not generate new content, but instead process, analyze, or categorize existing content and data.
- Generative AI refers to a type of artificial intelligence that can create new content, “generating” outputs rather than simply classifying or categorizing input data.
Examples of non-generative AI are:
- Content management engines that automate or support tagging, sorting, and managing existing learning content
- Career pathway modeling engines that assess an individual's current skills, past performance, and career goals, then suggest potential career paths and the necessary training to achieve them
- Learning recommendations engines that recommend courses, webinars, or resources based on an individual's past learning history, job role, and corporate objectives
- Adaptive learning engines that analyze an employee's learning journey in real-time and adapt the delivery of individual pieces of learning content accordingly
- Learning analytics engines that analyze vast amounts of learning data, providing insights into areas like course effectiveness, areas of improvement, and prediction of future training needs, for individuals as well as groups
- Skills engines that build and maintain skills frameworks by identifying, categorizing, and relating various skills and competencies for jobs, roles, and training paths
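To make the non-generative category concrete, here is a minimal sketch of a learning recommendations engine of the kind listed above. All course names and skill tags are invented for illustration; real products use far richer signals (job role, learning history, embeddings), but the core idea is the same: the engine scores and sorts existing content rather than generating anything new.

```python
# Hypothetical catalog: each course is tagged with the skills it teaches.
CATALOG = {
    "Negotiation Basics": {"communication", "sales"},
    "Advanced Excel": {"data-analysis", "reporting"},
    "Data Storytelling": {"data-analysis", "communication"},
}

def recommend(learner_skills, catalog, top_n=2):
    """Rank courses by how many of their skill tags overlap the
    learner's skill profile. No new content is created; the engine
    only processes and orders what already exists."""
    scored = []
    for course, tags in catalog.items():
        overlap = len(tags & learner_skills)
        scored.append((overlap, course))
    scored.sort(reverse=True)
    return [course for score, course in scored[:top_n] if score > 0]

# A learner whose profile shows data-analysis and reporting skills:
print(recommend({"data-analysis", "reporting"}, CATALOG))
# → ['Advanced Excel', 'Data Storytelling']
```

The key property to notice: every output is drawn from the existing catalog, which is what distinguishes this class of engine from the generative examples that follow.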
Examples of generative AI, which tend to be LLM-based engines, are:
- Custom course content engines that produce text-based learning content in the form of summaries, bullet points, questions, flashcards, and scenarios
- Content update engines that suggest updates or modifications to existing learning content based on feedback, updates in products or services, or research, ensuring materials remain relevant
- Chat engines that provide performance support and learning opportunities, in the moment of need, on a range of topics, in conversational style
- Automated feedback engines that generate personalized feedback for learners based on their performance, highlighting areas of strength and where improvement is needed
- Training scenario creation engines that create realistic training scenarios, for example for sales or customer service training
- Learning object creation engines that produce elements such as generated videos, graphics, or audio files
2. What data is the AI trained on? What data does the AI have access to?
These two questions are key to gaining a better understanding of how a specific technology or product utilizes generative or non-generative AI engines, and of the quality of outcomes one can expect.
For generative AI engines, it is important to know what data the engine is trained on. LLMs are fantastic communication engines, but they are useful only if they are backed up by a strong "knowledge engine." An LLM like GPT, for example, will largely be trained on data "scraped" from the web. A general guideline as to whether this is sufficient to fulfill the tasks in question is to ask whether the average quality of internet content is good enough to provide a reliable and useful answer. In many general cases, this may suffice. In very specific cases, this may not suffice at all.
If additional training is required, for example on information or processes very specific to an organization and its products and services, then it is important to scope out two things:
Firstly, what data, content, or information is available to further train the engine, and how much additional data and content would be required to achieve sufficiently reliable outcomes.
Secondly, how access to the knowledge engine would be provided, e.g., through upload to the cloud, or to an instance of the AI sitting within the organization, behind the firewall.
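The "knowledge engine" idea above can be sketched in a few lines. This is an illustrative toy, not any vendor's API: a retrieval step selects the most relevant internal document, which is then supplied to the LLM as context, so answers reflect the organization's own knowledge rather than only what was scraped from the web. The documents, the naive keyword matching, and the prompt wording are all invented for the example; production systems typically use vector embeddings for retrieval.

```python
# Hypothetical organization-specific knowledge base.
KNOWLEDGE_BASE = {
    "returns-policy": "Customers may return products within 30 days.",
    "onboarding": "New hires complete safety training in week one.",
}

def retrieve(question, kb):
    """Naive keyword retrieval: pick the document sharing the most
    words with the question. Real systems use semantic search."""
    q_words = set(question.lower().split())
    best = max(kb, key=lambda doc_id: len(q_words & set(kb[doc_id].lower().split())))
    return kb[best]

def build_prompt(question, kb):
    # The retrieved passage becomes grounding context for the LLM call.
    context = retrieve(question, kb)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days do customers have to return products?", KNOWLEDGE_BASE))
```

The point of the sketch is the division of labor: the LLM supplies the communication ability, while reliability comes from the organization-specific content the retrieval step feeds it, which is exactly why the two scoping questions above matter.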
For non-generative AI engines, these data and data-hosting questions are also key, albeit from a slightly different angle. Since access is essential for the engine to process and analyze the data, it is important to understand where the data is hosted: In the cloud via upload, or internally through a local instance behind the company firewall?
And just as in the generative AI case, one must also ask: Is there enough data and content for the AI to do a good job? In order for an AI to organize content aligned with skills, for example, an organization needs to have access to enough information on jobs, roles, and skills to map to, as well as enough content to map in the first place.
An in-depth discussion of the above gives us a solid understanding of the foundation required for each use case, and of how to build an AI-powered project that delivers.
3. What are the strengths and what are the limitations of the engine?
This question is related very closely to the previous one, and a natural follow-up point of discussion when talking about knowledge engines, AI automation, available data, and quality of the data.
Every AI engine, be it generative or non-generative, has strengths and weaknesses—just like any tool. GPT is very strong on (knowledge-based) Biology Olympiad questions, but underperforms on (mental-model-based) Physics or Chemistry Olympiad questions.
An open conversation with vendors about strengths, weaknesses, and limitations is vital in order to establish a mutual understanding of not only the use case in general, but the deliverables and outputs in particular. A model presented as having no weaknesses or limitations, touted as simply "delivering" whatever the client is looking for, should be a red flag in the buying process.
4. How is my data 'ringfenced' within the AI engine?
In the second question above, we discussed an organization's AI training data, as well as the data or content the AI requires access to in real time in order to perform. The focus there was on how the AI gets this information in the first place.
Here, the question is where the AI stores the data it is trained on or working with. Deployments fall into two categories: global and local instances.
An example of a global instance is one where the system learns not only from my interactions with it, but also from everyone else's. Information shared with such a system could therefore be stored within it and utilized as part of an output or recommendation for a different user at a different point in time.
Safeguards are being worked into systems to improve privacy and trust, but a detailed conversation with each provider is necessary to identify what data is being used from the user, and how.
In contrast, a local instance can be understood as a separate "version" of the AI for each client. This means that the vendor would be deploying separate instances of their AI to each client, with no overlap or connection to some "centralized" AI system.
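The distinction can be made concrete with a toy sketch. This is not any vendor's architecture, just an illustration of the data-flow difference: a global engine accumulates interactions from every client into one shared store, while a local instance keeps each client's data isolated.

```python
class GlobalEngine:
    """One shared store for all clients: client A's data can
    influence outputs generated for client B."""
    def __init__(self):
        self.shared_history = []

    def interact(self, client, text):
        self.shared_history.append((client, text))

class LocalEngine:
    """One isolated store per client: data never crosses between
    client instances (the 'ringfenced' deployment)."""
    def __init__(self):
        self.per_client = {}

    def interact(self, client, text):
        self.per_client.setdefault(client, []).append(text)

g = GlobalEngine()
g.interact("AcmeCorp", "internal pricing details")
g.interact("OtherCo", "product roadmap")
print(len(g.shared_history))              # both clients share one store: 2

l = LocalEngine()
l.interact("AcmeCorp", "internal pricing details")
l.interact("OtherCo", "product roadmap")
print(len(l.per_client["AcmeCorp"]))      # isolated: only AcmeCorp's data: 1
```

When a vendor describes their deployment model, the question to press on is which of these two shapes their storage actually follows, for both training data and interaction logs.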
This question is key, as providers differ considerably in their ability to provide ringfenced, local instances that sit behind an organization’s firewall. This may change quickly in the upcoming months, but this is something to keep a keen eye on, depending on your organization’s needs.
Lastly, the vendor may also provide content of their own, for use with all clients. How would this be integrated, and where would it be stored? These are natural follow-on questions in this line of thought.
5. Who will have access to or control over tools enhanced by AI?
Key considerations here include what those within an organization can access and control, and what remains behind the scenes or relies entirely on customer support.
Now that we have hopefully gained a solid understanding and confidence from the vendor as to what types of AI are being deployed, how they are trained, what data they work with, how the data is accessed, and how data is ringfenced, the final question is about the interaction with the AI features.
This will of course vary greatly, depending on the type of technology or product and the specific use case. Therefore, without being too prescriptive, we offer more of a "flavor" of what to look for in this context. Useful things to ascertain include:
- Who interacts with the AI, and how? For a chatbot, the user will obviously interact with the AI. But how would training the tool work?
- Who can run updates on the training, as they become available? For example, when an organization improves their offering or launches completely new products and services, how are these integrated into the tool?
- Who controls the training set, and how?
- How are updates run? How long do they take?
- Who has access to analytics?
- How much can be done in-house, and which aspects require customer service involvement?
Often, leading vendors will already have well-defined terminology and user profiles, for example: learner, user, admin, super-admin, etc. It is crucial to ask for and go through these to gain a better understanding of what deploying the technology would look like for each profile, as well as the distribution of roles and responsibilities once the system is up and running and requires upkeep and tweaking.
Another area is content: Is AI used to develop content in the vendor's catalog? Are those content creation tools accessible to course creators within your organization? Can content creators or admins override what is generated by the AI? If so, how does this look in practice?
And last but not least: What does testing look like, regardless of whether the solution deploys generative or non-generative AI?
With the rapid influx of AI tools, it has never been more important to be an informed buyer. Asking the right questions helps to ensure you are working with a knowledgeable and experienced vendor who has a deep and clear understanding of the role and types of AI in the solution they are providing.
It may well be that the first person you talk to at a vendor doesn't know all the answers, which is understandable. It is worth seeking out the right team or individuals to talk with, in detail, to ensure you can confidently choose the AI-powered technology that fits your needs best.
Here’s to fruitful conversations with vendors and providers!
Join us at the AI & Learning Symposium!
To learn more about selecting and using AI technologies in an L&D context, join the AI & Learning Symposium on October 24 and DevLearn 2023 Conference & Expo, October 25–27 in Las Vegas. Don’t miss sessions with Markus Bernhardt—and other AI experts!