In 2022, several global companies invested heavily in metaverse-based learning programs. Months later, generative AI entered the scene and reshaped digital learning priorities overnight, and many of those metaverse initiatives were shelved. The cost? Millions of dollars and countless hours. These kinds of missteps aren't just about budget. They erode trust, slow progress, and happen more often than we admit.

No one could have predicted exactly how quickly AI would reshape the learning landscape, but the situation raised a familiar challenge: How do we make smart decisions in a world where priorities shift fast?

That’s where agile experimentation comes in. While it can’t prevent change, it allows L&D teams to move fast, reduce risk, and make better decisions based on real insight, not just assumptions.

A smarter way to learn what works

Agile experimentation offers L&D teams a powerful way to validate ideas before broad implementation. It involves running small, focused tests to gather evidence. This practice allows for rapid iteration, minimizes risk, and ensures that decisions are based on real insights, rather than assumptions.

Take personalized feedback as an example. Before building AI-generated feedback into all learning journeys, a team could test it in a single leadership module. Learners would receive a short AI-generated message after completing a skills exercise. The team could then compare perceived usefulness and emotional tone to human-written feedback or no feedback at all.

By employing quick feedback cycles and low-risk pilots, teams can effectively determine what truly works, saving time and money and protecting credibility.

Here’s how to do it well.

Start with a strong hypothesis

To design an effective experiment, the first step is knowing what you aim to test. A well-defined hypothesis narrows that focus to a single, clear variable, such as content, delivery method, or framing, and often takes the form of an "if... then..." statement.

For example, in the personalized feedback pilot, the hypothesis might be “If learners receive immediate, personalized AI-generated feedback, then they will show higher quality skill application in a follow-up task compared to those receiving human or no feedback.”

These hypotheses can be evaluated in contexts such as a pilot group, recurring sessions, or internal communication channels.

Measure what matters

Once a test is underway, rigor matters. That doesn’t mean complex dashboards or advanced analytics; it means being clear about what you’re testing and how you’ll know if it worked.

Use simple, consistent tools like pulse checks, quick polls, or before-and-after snapshots. Define your success measures from the start, and resist the urge to change them mid-test.

Even small experiments can provide credible insights if the data is focused and trusted. Try not to overwhelm stakeholders with too much information. Clarity and consistency go a long way.

In the AI feedback test, success measures could include learner ratings of usefulness and tone, behavior change in a follow-up task, or even preference between human and AI responses.
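
To make the measurement concrete, here is a minimal sketch in Python, using entirely hypothetical 1-to-5 usefulness ratings and made-up group names, of how a team might summarize a single pulse check across the three feedback conditions:

    # A minimal sketch with hypothetical pulse-check data: compare average
    # usefulness ratings across the three feedback conditions in the pilot.
    from statistics import mean

    # 1-5 usefulness ratings collected after the skills exercise (illustrative only)
    ratings = {
        "ai_feedback":    [4, 5, 3, 4, 4, 5, 3],
        "human_feedback": [4, 4, 5, 3, 4, 4, 5],
        "no_feedback":    [3, 2, 3, 4, 2, 3, 3],
    }

    for group, scores in ratings.items():
        print(f"{group}: mean usefulness = {mean(scores):.2f} (n = {len(scores)})")

Even a summary this simple is often enough to show whether the AI condition is trending ahead of, behind, or level with human-written feedback, which is exactly the kind of focused, consistent measure stakeholders can trust.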

Bring stakeholders in early

The success of any experiment often hinges on the people around it. Loop in stakeholders from the beginning by sharing what you’re testing, why it matters, and how you will measure success.

This early alignment builds trust, makes the process more transparent, and ensures your findings are more likely to be acted on.

One effective way to make stakeholders feel invested is to involve them in the process. For instance, you might bring learning designers and faculty in early to review AI feedback drafts. Their reactions will quickly highlight tone, bias, or relevance issues, and shape whether the tool can be trusted at scale. This feedback loop also helps refine AI-generated responses into feedback that learners can actually use.

You'll also want to build a team of champions. These are individuals who will advocate for your experiment, help remove any roadblocks, and generate enthusiasm. Imagine a manager who sees their team's engagement soar with a new approach. They can become a powerful champion by spreading the word.

Finally, always be ready to answer questions about your data and methods. Even if your tools are simple, thoughtful design builds credibility.

Navigating common challenges

Even with a clear approach, agile experimentation comes with common challenges:

  • Small sample sizes: A limited number of participants can make conclusions unclear. Focus on the direction of results and look for meaningful trends even if they’re not statistically significant; one simple way to check direction is sketched after this list. Qualitative feedback can also provide valuable insights.
  • Competing priorities: Stakeholders may want to test multiple things at once, which can muddy the outcome. Recommend a clear prioritization, starting with experiments tied to urgent business needs or high-impact areas.
  • Pressure for quick results: Stakeholders may expect fast answers, but insight takes time. Set realistic expectations early, and share updates regularly. Visualizing early trends can help manage pressure. In the AI feedback test, even early learner quotes, like, “It felt like someone was cheering me on” or “The feedback was weirdly off” can help build support or highlight red flags before scaling.
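
As a rough illustration of reading direction from a small pilot, the sketch below (again using hypothetical ratings) resamples the two groups many times to see how often the AI-feedback group comes out ahead of the no-feedback group. It gives a sense of which way the results lean without claiming statistical significance:

    # A minimal sketch with hypothetical data: resample a small pilot repeatedly
    # to see how often the AI-feedback group outscores the no-feedback group.
    import random

    ai_group = [4, 5, 3, 4, 4, 5, 3]     # usefulness ratings, AI-feedback group (illustrative)
    no_feedback = [3, 2, 3, 4, 2, 3, 3]  # usefulness ratings, no-feedback group (illustrative)

    random.seed(1)
    trials = 10_000
    wins = 0
    for _ in range(trials):
        ai_resample = [random.choice(ai_group) for _ in ai_group]
        none_resample = [random.choice(no_feedback) for _ in no_feedback]
        if sum(ai_resample) / len(ai_resample) > sum(none_resample) / len(none_resample):
            wins += 1

    print(f"AI feedback scored higher in {wins / trials:.0%} of resamples")

A consistent lean in one direction, paired with learner quotes, is usually enough evidence to decide whether a small pilot deserves a larger follow-up.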

Agile experimentation helps L&D teams move forward with clarity and confidence. In a world where priorities shift fast, from metaverse to AI, it keeps teams responsive and focused on what truly works.

By starting small, testing quickly, and using reliable data to guide decisions, teams can reduce risk, adapt to change, and build stronger support. Even with limited resources or pressure for quick results, a focused approach helps uncover what works and what doesn’t without needing perfection.

When experiments are aligned with organizational goals and clearly communicated, they can turn ideas into action and insight into impact.


All Contributors

Gia Fanelli

Learning Design Expert, McKinsey & Company

Bob Kaeding

Learning Design Expert, McKinsey & Company

Suzy Robertson

Learning Design Specialist, McKinsey & Company