AI Is NOT the Future of Learning & Development

By Dr. Amos Glenn
“You should really be using AI.”
If you’re an instructional designer, you’ve probably heard that from a boss, a vendor, or a gushing article on LinkedIn. The pressure is real. This article is my pushback.
I’m not against technology. I’ve been excited about its potential for learning since I wrote my first educational game in 1982 (you earned a point for every multiplication problem you got right; not much of a game, but I was nine and the computer had 16KB of memory).
I’m pushing back against AI for one reason: AI isn’t the future—it’s the present and the past. We’ve been using the parts that work for years—we just called it “predictive text” or “adaptive feedback” (or “Furby”). What’s being hyped as AI today is usually a loose collection of features looking for a purpose and investors.
Microsoft’s Immersive Reader is a perfect example. It was created at a 2015 hackathon by mashing up existing Microsoft tools. The tech never changed, but in 2019—right when early GPT chatbots were making headlines—it was suddenly rebranded as an “Azure Cognitive Service,” which was later renamed the “Azure AI Foundry.” Same code; new costume.
Educators aren’t here to impress investors. We’re here to build learning experiences that make a difference. To help us cut through the hype surrounding AI, I’d like to share four rules I derived from human-centered design (HCD) to guide my own decisions about using AI (or VR, AR, etc.). This isn’t a new framework or a provocation. The rules are a lens that cross-functional teams can use to focus design decisions on learners, rather than flashy features.
1. Solve the real problem
AI isn’t snake oil, but when something is being pitched as a solution for everything from low engagement to high development costs, it’s prudent to ask, “What really is our problem?” What I’ve noticed is that AI tools typically promise to solve business problems, not learning problems. While both matter, we contribute to the bottom line and the quarterly growth target by solving learning problems: helping people gain the knowledge or skills they need to do something they care about.
Some confusion about the real problem comes from the way we often measure outcomes. Because measuring learning directly is so tricky, we regularly measure proxies instead (such as engagement, satisfaction, and completion). These proxies yield critical insights into the health and efficacy of the learning environment, but are not evidence of learning itself.
What’s counterintuitive is that addressing a proxy directly will likely muck it up. For example, if you try to bump up your engagement numbers by adding an AI chatbot, you no longer know if the bump reflects genuine improvement in the module’s relevance, clarity, and challenge, or just the chatbot’s novelty.
Instead, solve the real problem by finding out what is getting in the way of your learners’ engagement. It’s the difference between solving a business problem with learning and merely improving a business metric.
2. Prefer needs over requirements
Remember the hype around the Segway, the scooter that would reshape cities the way the PC reshaped computing? It had cutting-edge technology, heavyweight backing, and lots of buzz.
The only thing it didn’t offer was a reason to buy one. It was an innovative product that nobody needed, because the Segway was designed to meet its producers’ requirements, not its users’ needs.
AI is similarly an innovation looking for someone to need it. The hype is about how powerful AI is becoming and how you can’t afford not to use the latest AI tools. There is far less talk about which needs are being addressed. The result is an ill-defined fear of missing out, which increasingly lands AI, usually in some nebulous form, on the requirements lists for new learning products (and in job descriptions) without any consideration of learners’ needs.
For educators, though, “considering the needs of learners” is central to our work. Learning occurs in one place and time only: inside the learner, when learning meets one of the learner’s needs.
No requirement can mandate learning. Hopefully, your team can work together to align requirements with needs. However, if you find ambiguity or conflict between the two, verify that the requirement is actually required and not merely a hedge against missing out.
3. Apply systems thinking
The world is more complex than we like to admit. We naturally limit our thinking to simple cause-and-effect relationships: you take an allergy pill and the sneezing stops. But you may also fall asleep at your desk.
One action will have multiple, often unintended, effects. Learning is a complex system. Adding something to a learning experience over here causes something that feels unrelated to pop up over there. Applying systems thinking means including as many of those effects as possible when making decisions.
AI only makes the system more complex, especially when vendors can’t tell you how their AI works. It may help you personalize content for more effective learning, but it may also inadvertently perpetuate the biases in the AI’s training or reduce accessibility.
Complexity is unavoidable, but if you aren’t thinking about the side effects of a feature, you end up playing whack-a-mole with learning problems.
4. Prioritize the audience over the show
This rule is a gut check. Who benefits from this feature: the learner or us? There’s no shame in wanting to create something impressive, but don’t confuse that with the mania surrounding anything “powered by AI.” Designing for affect over effect is just theatrics.
I call this the “Avatar Trap.” Yes, it was ground- and record-breaking, but has anyone ever quoted Avatar to you? Meanwhile, my kids yell, “I’m walking here!” without even knowing Midnight Cowboy exists, and finishing the quote, “Life is like a box of…” is practically a reflex. Spectacle fades no matter how many awards it receives. To make a lasting impression, you need to resonate with the audience.
Learning works the same way. Adding AI-generated variations of ‘correct’ or ‘wrong’ might impress a committee, but not a learner. Microsoft’s Immersive Reader, on the other hand, quickly became the Princess Bride of edtech—no flashy features and no marketing at all, just creators laser-focused on an audience they understood and cared about.
How to use these rules
If your job is solving business problems with learning, the value of your work begins and ends with the change it facilitates in your audience. Human-centered design is a mindset for making decisions, whenever possible, based on the needs and values of the human beings we are designing for.
These four rules are a handy HCD lens that cross-functional teams can use to cut through hype and ease the pressure of peripheral concerns. The next time you’re asked to use an AI tool or your team’s discussion turns to using the latest tech, pause and unpack your reasoning by asking yourself and your team:
- Are we addressing the real learning problem?
- Does adding this requirement interfere with any of the learner’s needs?
- Have we considered the effects throughout the system?
- Is this something our learners want, or is it something we want?
There are no simple answers to any of these questions, and that’s okay. The insights you or your team gains by wrestling with them will make you better at your job.
People love to accuse our field of chasing tech fads only to see them fizzle. In reality, we’ve been looting other fields for their tech since the “magic lantern” was used to project anatomy slides in 1685. We don’t need the hype around AI to be excited about its possibilities. When we find a tool that works, we make it work in ways its creators never imagined.
Disclaimer: This content is solely the responsibility of the author and does not necessarily represent the official views of the University of Pittsburgh or the National Institutes of Health.
Image credit: askinkamberoglu