Deeper Design: Putting It All Together

We were now at the end of the design work, and it was time to finish the development. Here is the story of how that effort went.

Development: Prototype, test, revise, repeat

Development wasn’t isolated from design, but ran somewhat in parallel with it. We were already getting builds that we could play with from the early objectives as we were designing the subsequent objectives. We had some interface hiccups, and problems with Flash output (for the simple reason that Flash output is decreasingly desirable).

Storyline

For the record, the development environment was Storyline. As mentioned in a previous article, we tried out a branching tool, but it had some challenges. Instead, we stuck to Storyline, though that meant using Word to draft the dialogue and branches, as well as PowerPoint decks. We also used PowerPoint to draft the reference content. We were fairly eclectic, though if we were cranking out content together we’d centralize on a process (and I expect Learnnovators will consolidate what they learned from this experience into their own workflow).

I note that Storyline did leave challenges in delivering the output, as it was continually expressed as Flash. That’s not good, as Flash is increasingly recognized as a security risk, and most folks are turning it off if not actively banning it. Supposedly we were going to get HTML5 at the end of the process, but it wasn’t happening yet. During development, the course eventually could play on an iPad via the Articulate Player, but on a browser I always had to click to play Flash (I have a blocker that means I have to expressly invoke Flash).

Testing

An issue that grew bigger was the need for testing. Once we’d done the first objective, I pushed regularly for a test version that we could get feedback on. We finally got there when we already had designs essentially done for all four objectives!

The feedback we got was very helpful, particularly in pointing out that our scenario didn’t have enough context, and that the reference material was too long and not well enough aligned to the task. Smaller problems included some interface issues. The testing was very worthwhile.

Revision

The changes instituted included a “chainsaw” approach to the reference material. I have often claimed that I can take out 40 to 60 percent of anyone’s prose (including my own). In this case, I had largely left the prose untouched, and it was time for a change. The number of clicks to see the content went from over 10 to five in the case of the first objective. Similar reductions were seen in the reference content for the other objectives.

We also considered making the content just one scrolling screen. Initially, we decided to go with a “reveal” of the content, rather than just the document. We ultimately went back to a single scrolling document for each reference when the testing revealed a concern with the required clicking to get through the content.

Learnnovators: Technically, this was a bit of a challenge, since the scrolling panel required placing anchors at different locations that learners can directly navigate to. So we created this in HTML and embedded it into Storyline.
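A minimal sketch of that anchor approach (the section names and ids here are hypothetical, not taken from the actual course): a short navigation list links to `id` targets inside the scrolling document, so learners can jump directly to a section.

```html
<!-- Hypothetical reference page with in-page anchors (not the actual course markup). -->
<nav>
  <a href="#overview">Overview</a>
  <a href="#when-to-use">When to use it</a>
  <a href="#pitfalls">Pitfalls</a>
</nav>

<section id="overview"><h2>Overview</h2><p>…</p></section>
<section id="when-to-use"><h2>When to use it</h2><p>…</p></section>
<section id="pitfalls"><h2>Pitfalls</h2><p>…</p></section>
```

A self-contained page like this can then be embedded into Storyline (a web object is one way to bring external HTML in), so the anchors keep working inside the scrolling panel.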

Scenario and navigation improvements

We also had feedback that the scenarios weren’t well set up, both in the overall prose intro to the entire learning experience and also in the prose introducing each module. Consequently, we introduced small tweaks to address these concerns. We elaborated on the intro prose and added prose to each of the modules.

The scenarios are designed to be somewhat challenging, and the alternatives not obvious. The content is minimal, on purpose. We tried to indicate why this content is important up front, and tap into intrinsic interest, so leaving some challenge leads to better outcomes if our learners do make the effort. If they don’t, and use more of a trial-and-error approach in the scenarios instead, they still should get the “take home” message, but it won’t be as cemented. To paraphrase an old saying, you can lead a learner to learning…

We made some interesting navigational choices. We thought being able to track the discussion would be an interesting option for learners, so you could see a representation of the dialogue that had occurred on the path you took through the scenario. We also made an iconic representation of the complexity of the scenario, with a little set of dots that colored in as you progressed. Of course, nodes you hadn’t seen stayed unmarked. The point of this My Path feature was to indicate that there were other alternatives.
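The underlying idea is simple enough to sketch (a hypothetical illustration, not the actual course code): given the scenario’s nodes and the set the learner has visited, compute which dots to color in and which to leave unmarked.

```javascript
// Hedged sketch of the "My Path" dots: function and node names are
// hypothetical, not from the actual course implementation.
function pathDots(allNodes, visited) {
  // For each scenario node, mark whether the learner has seen it;
  // "seen" dots get colored, unseen dots stay unmarked.
  return allNodes.map(function (node) {
    return { node: node, seen: visited.indexOf(node) !== -1 };
  });
}
```

The rendering layer (whatever draws the dots) then only needs to read the `seen` flag for each node, which keeps the progress display decoupled from the branching logic.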

Respecting the learners

The emphasis on challenge, on not playing down to the audience, is reflected in other elements as well. There was some concern in the feedback about the interface, specifically whether the buttons were obvious. It was a deliberate choice not to do the usual interface introduction, nor the instruction to “click.” I admit to an editorial stance: I feel most folks (certainly anyone accessing this content) should be able to figure it out, as it’s been years of the same sort of eLearning model.

We decided to not have too many didactic instructions about how to move through the learning experience. We did develop simple introductory language explaining our interface evolutions: Reference was the way to access the content within the scenarios (and it’s all scenarios and activities); My Chat was your dialogue experience in the scenario; and a menu showed an outline of the whole experience, letting you navigate between modules.

Learnnovators: It proved a challenge to get the My Path and My Chat options to work within Storyline, so we leveraged JavaScript for the purpose, writing a series of triggers to achieve the required functionality.
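To give a flavor of what such a trigger might do for My Chat, here is a hedged sketch: the core transcript logic as a plain function (so it runs anywhere), with the Storyline wiring shown in a comment using Storyline’s real GetPlayer()/GetVar()/SetVar() JavaScript API. The function and the variable name `MyChatLog` are hypothetical, not the actual course code.

```javascript
// Hypothetical core of a "My Chat" transcript: append one dialogue line
// to the running log string. Names are illustrative only.
function appendToChat(transcript, speaker, line) {
  var entry = speaker + ": " + line;
  // Start the log on the first entry; otherwise add a new line.
  return transcript ? transcript + "\n" + entry : entry;
}

// Inside a Storyline "Execute JavaScript" trigger, this might be wired as:
//   var player = GetPlayer();
//   var chat = player.GetVar("MyChatLog");  // "MyChatLog" is a hypothetical variable
//   player.SetVar("MyChatLog", appendToChat(chat, "Mentor", "Good choice."));
```

Keeping the string-building separate from the Storyline API calls makes the logic testable outside the authoring tool, which matters when each change requires a republish to check.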

Targeted examples

I worried that we didn’t have examples, so I worked with my Internet Time Alliance colleagues on getting them. Again, the examples were boiled down to the essentials. There was some concern that they were too telegraphic to be useful, but I was definitely shooting for cognitive gaps to require some mental work to make the connection.

I also have to admit that this was developed much more slowly than normal. It wasn’t because of the distance so much as that this was A) exploratory in a sense, with us negotiating shared understandings and also iterating, and B) a sideline for both of us. Both partners had things to do to pay the bills, whether conferences or meetings to attend, or deadlines on other projects.

Consistency: Little things mean a lot

As the content started coming together, it became clear that there were some inconsistencies between modules. While understandable, it wasn’t desirable. As a consequence, we took a sustained look at the content as a whole, and we ironed out some difficulties in the overall experience such as language and formatting.

There was also a surprise for me: The content had been looking good, but I found the readability of some text was a challenge. I mentioned it and learned that a visual design team hadn’t gotten engaged yet. Actually, they hadn’t gotten “re-engaged,” as they’d been involved early on, but had been on other things while the content iterated. The result of their intervention was small and yet important. They improved the readability, but also provided the resulting “comic panel” visual style that opened the course and each module, which also hinted at what was to come—a definite benefit conceptually and visually.

The editorial review process: Discoveries

One interesting outcome was the difference in perspectives. There were good, tight eyes on the content on each side at various times. Then, as things came together, it became clear that there were inconsistencies in the various components, and it became useful to step back and take a holistic view. Ultimately, it became worthwhile to grab a screenshot of each activity ending and compare, which turned out to reveal some differences that were worth remedying. The alternate handoffs and cycles made reviewing an iterative process, but having different eyes on it at different times increased the quality of the experience.

And, with a deliberate effort to ensure that everyone had a chance to contribute, there were times when I stepped back and chose not to polish too much of what was done. It needed to be good enough, but there are times to step back and let someone’s different vision take hold, to help them develop confidence. This was the premise of the content, and I felt we should practice what we preach.

Output issues: Flash

As I indicated at the outset of this post, the issues with output continued. While HTML5 was both promised as an output format and desired, it proved problematic to achieve. With Flash being increasingly rejected across browsers for valid security reasons, the continued reliance on it is a real concern. At this point, our hope is that the final output can be delivered “Flash-free.” It finally ended up working on the iPad, not as fast as it does in a browser, but acceptably.

Learnnovators: It was tricky to get a working output that played well without the Flash plugin. Every time the course was updated (and it was updated quite a bit, since we were experimenting and going back and forth on look and feel as well as functionality), we found it a challenge to get a functioning HTML5 course. We reached out to the Articulate community and did research on our own in parallel. Eventually, we found that the source files had gotten corrupted for no apparent reason, so we ended up redeveloping the course from scratch.

And now, we await your feedback on the result

Overall, we tried to stick to an activity-centric design. We put scenarios and similar decisions up front, and the reference materials were accessible, but not required. We of course worried that this model might mean people wouldn’t read the reference materials, but we hoped that we’d made the scenarios challenging enough. There were times when the choices were obvious, and times when they were overly ambiguous, but real situations can be as well, and we ultimately felt there was a reasonable balance. We’ll find out empirically how we did!
