Learning Solutions 2016 Reflections: Why Observation and Sharing Have to Scale

It wouldn’t be surprising if the attendees of this year’s Learning Solutions Conference & Expo 2016 looked at the theme for the show (“Sharing What Works”) and thought to themselves, “Right, the speakers are sharing effective strategies, we as attendees are sharing our own—yeah, I got it.”

But what if “Sharing What Works” is far more than a disposable, three-word tagline? What if it’s literally the key to future success in a rapidly changing L&D industry?

Clark Quinn said at the show, “L&D has the opportunity to be the most valuable part of your business.” And as controversial or far-fetched as that may sound, he’s absolutely correct—and moving the bulk of observation and sharing outside of training is how you get there.

Bill Nye kicked off Learning Solutions 2016 with a frenetic keynote that dove into everything from WWII codebreaking to mechanical engineering in the 1970s to the future of space exploration. But it was his statement, “Learning starts with observation,” that I found more powerful than any of the amazing scientific breakthroughs that he shared.

How much observation do we have into the reasons an eLearning course was designed the way it was, or the reasons it was produced at all? What hindrances to availability prevent that course from being observable and consumable by its learners? How much observation does the business side of your organization have into how training’s efforts are truly impacting the business?

Many of the week’s speakers who followed Nye’s keynote (not an easy feat!) shared familiar challenges to scaling up observation, and some great ways to overcome them.

For observability at a granular level, Carrie Ann Desnoyers and Jennifer Hendryx tackled the challenges around making learning objects reusable, interoperable, durable, and accessible. As yesterday’s bulky eLearning courses are replaced with easier-to-consume microlearning units, it will be increasingly important to incorporate all of the aforementioned learning-object characteristics into their original design. By doing so, your learning objects can be reused in a variety of courses, remain interoperable on any LMS, and ultimately be accessible whenever and wherever designers, instructors, and learners need them.

Going from learning objects to increasing the accessibility of entire courses, I had the chance to attend Nina Talley’s conference session and to interview her about her role in providing technical support for virtual, instructor-led training deliveries around the globe—often to low-bandwidth areas. Nina shared not only the familiar headaches caused by latency and bandwidth constraints, but also the responsibility that L&D shoulders for making sure that learners are equipped to troubleshoot issues, by working with them long before a virtual class ever begins, or even before the developer has created it.

And this is just the start of the observability that we’re looking for, and it amounts to what I’m calling “Sharing What Works V2.0.” So many Learning Solutions speakers encouraged us in the L&D field to get out of our comfort zones, cast aside our silos, align ourselves alongside everyone from IT (yes, even IT) to product management to sales, and bring the business and customers into the eLearning design process.

This isn’t just the L&D industry sharing what works solely for training. This is L&D collaborating with other departments in an organization and sharing what works for them. According to conference speaker Clark Quinn, too few L&D professionals are leveraging that kind of feedback and data to incorporate business impact into design. Doing so eliminates worthless, unjustified assumptions about the level of business impact that training will make, and greatly increases the probability and expectation of that impact being delivered every time.

“We aren’t doing a good enough job,” said Quinn. “Only five percent of L&D organizations excel at using data to align and run efficiently.”

When Quinn asked attendees to name some of the metrics applied at their own organizations, his point was proven and immediately hit home for many. Participants named all-too-common metrics: “pass or fail,” “number of courses created or completed,” “time spent developing courses,” “likes and shares.” And while these metrics can lead to insights around efficiency, they speak only to the efficiency of L&D itself, offering zero insight into efficiency gains among the learners themselves. They provide no proof of business impact being made by anyone.

Quinn suggested that the measurements L&D should focus on, and focus on first, are those that reflect actual business impact. How long does it now take to close a sale? Has your code quality improved? How quickly can you now resolve defects? Are your call centers able to close support tickets with less escalation? This feedback doesn’t come from learners sharing how much they enjoyed a course, or how quickly they were able to locate or complete it. It comes from non-learner sources such as xAPI data from the tools learners use, from learners’ managers, and from customers directly.
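The non-learner feedback described above is the kind of data xAPI was designed to carry. As a rough sketch (the learner, verb and activity IDs, extension URI, and support-ticket scenario are my own illustrative assumptions, not examples from the conference), an xAPI statement recording a business-impact event rather than a course completion might look like:

```python
import json

# Minimal sketch of an xAPI statement capturing a business-impact signal
# (a support ticket closed without escalation) instead of a course score.
# All IDs below are hypothetical placeholders.
statement = {
    "actor": {
        "name": "Example Agent",                     # hypothetical learner
        "mbox": "mailto:agent@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/support-ticket/4711",
        "definition": {"name": {"en-US": "Support ticket resolved"}},
    },
    "result": {
        "success": True,
        "duration": "PT42M",  # ISO 8601 duration: 42 minutes to close
        "extensions": {
            # Hypothetical extension flagging whether the ticket escalated
            "http://example.com/xapi/escalated": False,
        },
    },
}

# In practice this JSON payload would be POSTed to an LRS statements endpoint.
payload = json.dumps(statement, indent=2)
print(payload)
```

Aggregated over many such statements, metrics like average resolution time and escalation rate come from the tools learners use, not from the course itself.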

The goal here goes far beyond getting learners—and learning professionals—to share what works for them, and beyond focusing only on meeting learners’ expectations. “Sharing What Works,” more often than not, means getting feedback from everyone outside of training to find what measurements they’re looking to improve, and designing classes and eLearning accordingly.

I can’t speak for The eLearning Guild, but I have to assume that “Sharing What Works” was in no way intended to be limited to the confines of the conference, or to be kept private from anyone outside of L&D. And I hope that training does become the most valuable part of many organizations. Company-wide collaboration will be L&D’s key to making it happen—how many of us are already on the way? I’d love to hear your story.
