In this era of increasing change, it’s a given that we’re facing more unique and/or ambiguous situations. The issue then is: how do we cope? Our first line of defense is (or should be) science: specific results or inferences from theory. However, these may fall short. What do we do then? The short answer is experimentation. Experimentation in L&D should be part of the corporate culture.
We prefer to use good principles to determine our actions. There are well-established principles that can help us adapt: learning theory, behavioral economics, and factors for innovation are all relevant. These can guide us to good answers for how to design, how to solve, how to answer.
(As an aside, my personal take is that I still see many gaps between what we know and what is practiced. Open plan seating, yearly reviews, learning styles, millennials, and more are all still extant despite being soundly debunked as flawed. We aren’t professional enough yet in our practices!)
Even when we do follow science, however, there are times when it’s not obvious what to do. The situation is new enough that there aren’t results yet, or it’s complex enough that it’s not clear what frameworks hold. What do we do then? Experiment!
The Cynefin framework
Dave Snowden’s Cynefin framework categorizes situations by their characteristics. His model has five areas: the simple, complicated, complex, chaotic, and disorder. For the first two there are either rote solutions or experts who know what to do. These are known situations. The following two require different approaches.
For what Snowden characterizes as complex domains—where the right course of action can’t be determined but you have time to react—he recommends experimentation. When you get to the extreme of chaotic domains, he recommends just trying something and seeing what results.
The complex domain is really where you’re innovating. You make some hypotheses, create a test, evaluate the results, and iterate until you determine the best course. Should we revise our interface this way? Should we use AR here? Let’s prototype and test. We can run an A/B study, or compare new results to a baseline. In many cases, the internet makes this easy, with the ability to create different versions of a web page, for instance.
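As a minimal sketch of what evaluating such an A/B study might look like (the variant names and completion counts below are hypothetical, not from any real study), a two-proportion z-test can indicate whether one version of a page genuinely outperforms another or whether the difference is likely noise:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is variant B's success rate different from A's?"""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no real difference
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: learners who completed each version of the experience
z = two_proportion_z(success_a=120, n_a=400, success_b=150, n_b=400)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```

The point isn’t the statistics per se, but that a simple, agreed-upon test gives you that “basis upon which to move” rather than a gut call.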
An important outcome is that we have a basis upon which to move. However, there are a number of requirements that guide the ability to successfully experiment and get useful answers, and it’s worth reviewing them.
Considerations
The first criterion is to make sure you are asking the right question! For instance, there’s the all-too-familiar “we need a course on this” request, made without knowing the real problem. In this case, we have to be clear what problem we need an answer to. To put it another way, we need to know what the data will tell us. On principle, you shouldn’t collect data you don’t know what to do with. Ensure there’s a match between the question and the data you’ll receive from the experiment.
Then a second criterion is creating a design where the data will be relevant. Are you testing the right range of values, and with sufficient spacing? For instance, testing learning one week after a class, when the performance opportunities come on average every four weeks, won’t tell you if your learning is working. Are you testing the right people? Are they representative? There are a lot of methodological issues that matter. It’s not difficult, but it is important!
And, you need to build some slack into your schedules and budgets to allow for this sort of decision making. If you’re expected to just execute, with no margin for evaluating different courses of action, you’ll either cut corners or be off the mark.
Building an innovation culture
There are other entailments, as well. For one, it has to be safe to experiment. Amy Edmondson points this out in her book Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy. If it’s not safe to fail, you won’t experiment (or, at least, share the learning). Safety in this sense is a critical component of an innovation culture. And yet such a culture is the only sustainable key to coping with the increasing change.
A second entailment is the benefit of sharing. The results of our learnings can just stay with us, but that doesn’t benefit the organization. One of the recurrent problems is disparate teams making the same mistake because no one’s shared. You don’t want to reward the failure, but you do want to celebrate the lesson learned. This is where a “show your work” mentality is so valuable.
We prefer to act on defensible bases for the success of our teams and our organizations. Empirical testing takes a long time, but it’s accurate if you’ve used an appropriate methodology. Don’t reinvent the wheel when you can avoid it; when established answers don’t exist, experiments are the next best thing. Random or “guess” decisions may be necessary in the worst situations, but that’s not the way to bet.
Making experimentation part of the L&D corporate culture, with all the entailments, is a step on the path to a learning organization. And that’s the success path in the long term. Getting there with alacrity is a defensible approach. It’s time to experiment with experimenting!
Editor’s note
Clark Quinn will be participating in The eLearning Guild’s second annual Executive Forum on October 23, 2018. This special one-day experience, which takes place prior to the Guild’s DevLearn 2018 Conference & Expo, is designed for senior learning and development leaders who want to collaborate with peers and industry experts on cutting-edge strategies that address the key challenges of the modern learning organization.