We were now at the end of the design work, and it was time to finish the development. Here is the story of how that effort went.

Development: Prototype, test, revise, repeat

Development wasn’t isolated from design; the two ran somewhat in parallel. We were already getting playable builds of the early objectives while we were designing the subsequent ones. We hit some interface hiccups, as well as problems with Flash output (for the simple reason that Flash output is increasingly undesirable).


For the record, the development environment was Storyline. As mentioned in a previous article, we tried out a branching tool, but it had some challenges. Instead, we stuck with Storyline, though that meant drafting the dialogue and branches in Word, and drafting the reference content in PowerPoint decks. We were fairly eclectic; if we were regularly producing content together, we’d standardize on a process (and I expect Learnnovators will consolidate what they learned from this experience into their own workflow).

Storyline also left challenges in delivering the output, as it was continually rendered as Flash. That’s not good: Flash is increasingly recognized as a security risk, and most folks are turning it off if not actively banning it. Supposedly we were going to get HTML5 at the end of the process, but it wasn’t happening yet. During development, the course eventually could play on an iPad via the Articulate Player, but in a browser I always had to click to play Flash (I run a blocker that requires me to expressly invoke Flash).


An issue that loomed larger over time was the need for testing. Once we’d done the first objective, I pushed regularly for a test version we could get feedback on. We finally got there when the designs were essentially done for all four objectives!

The feedback we got was very helpful, particularly in pointing out that our scenario didn’t have enough context, and that the reference material was too long and not well enough aligned to the task. Smaller problems included some interface issues. The testing was very worthwhile.


The changes instituted included a “chainsaw” approach to the reference material. I have often claimed that I can take out 40 to 60 percent of anyone’s prose (including my own). In this case, I had largely left the prose untouched, and it was time for a change. The number of clicks to see the content went from over 10 to five in the case of the first objective. Similar reductions were seen in the reference content for the other objectives.

We also considered making the content a single scrolling screen. Initially, we decided to go with a progressive “reveal” of the content, rather than a plain document. We ultimately went back to a single scrolling document for each reference when testing revealed concerns about the clicking required to get through the content.

Learnnovators: Technically, this was a bit of a challenge, since the scrolling panel required placing anchors at different locations that learners can directly navigate to. So we created this in HTML and embedded it into Storyline.
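A minimal sketch of that approach, assuming a plain scrolling container with named anchor targets (the ids, headings, and sizing here are illustrative placeholders, not the project’s actual markup):

```html
<!-- A scrolling reference panel: the nav links jump to anchors
     inside the overflow container. A page like this can be
     embedded in Storyline as a web object / HTML snippet. -->
<div style="height: 400px; overflow-y: auto;">
  <nav>
    <a href="#overview">Overview</a> |
    <a href="#steps">Steps</a> |
    <a href="#example">Example</a>
  </nav>
  <h2 id="overview">Overview</h2>
  <p>Reference content for the objective...</p>
  <h2 id="steps">Steps</h2>
  <p>The task-aligned steps...</p>
  <h2 id="example">Example</h2>
  <p>A worked example...</p>
</div>
```

Because the embedded page is its own document, following an anchor scrolls only the reference panel, leaving the surrounding slide untouched.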

Scenario and navigation improvements

We also had feedback that the scenarios weren’t well set up, both in the overall prose intro to the entire learning experience and also in the prose introducing each module. Consequently, we introduced small tweaks to address these concerns. We elaborated on the intro prose and added prose to each of the modules.

The scenarios are designed to be somewhat challenging, and the alternatives not obvious. The content is minimal, on purpose. We tried to indicate why this content is important up front, and tap into intrinsic interest, so leaving some challenge leads to better outcomes if our learners do make the effort. If they don’t, and use more of a trial-and-error approach in the scenarios instead, they still should get the “take home” message, but it won’t be as cemented. To paraphrase an old saying, you can lead a learner to learning…

We made some interesting navigational choices. We thought being able to track the discussion would be a useful option for learners, so you could see a representation of the dialogue that had occurred along the path you took through the scenario. We also made an iconic representation of the complexity of the scenario: a little set of dots that colored in as you progressed. Of course, nodes you hadn’t seen stayed unmarked. The point of this My Path feature was to indicate that there were other alternatives.

Respecting the learners

The emphasis on challenge, on not playing down to the audience, is reflected in other elements as well. There was some concern in the feedback about the interface, specifically whether the buttons were obvious. We deliberately chose not to do the usual interface introduction, nor to give the instruction to “click.” I admit to an editorial stance: I feel most folks (certainly anyone accessing this content) should be able to figure it out, after years of the same sort of eLearning model.

We decided not to load the experience with didactic instructions about how to move through it. We did develop simple introductory language explaining our interface elements: Reference was the way to access the content within the scenarios (and it’s all scenarios and activities); My Chat was your dialogue experience in the scenario; and a menu showed an outline of the whole experience, letting you navigate between modules.

Learnnovators: It proved a challenge to get the My Path and My Chat options to work within Storyline, so we leveraged JavaScript for the purpose, writing a series of triggers to achieve the required functionality.
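A hedged sketch of how such triggers might track a learner’s path (the variable and function names are my illustration, not the project’s actual code; `GetPlayer()` with `GetVar`/`SetVar` is Storyline’s documented JavaScript bridge):

```javascript
// Track which dialogue nodes a learner has visited so the
// "My Path" dots can be colored. Storyline text variables hold
// strings, so the visited list is serialized as comma-separated ids.

// Parse the serialized list of visited node ids.
function parseVisited(raw) {
  return raw ? raw.split(",") : [];
}

// Record a node visit, avoiding duplicates; returns the new string.
function recordVisit(raw, nodeId) {
  var visited = parseVisited(raw);
  if (visited.indexOf(nodeId) === -1) {
    visited.push(nodeId);
  }
  return visited.join(",");
}

// Decide the state of each dot on the My Path display: visited
// nodes are "seen" (colored); the rest stay unmarked, hinting
// that other alternatives exist.
function dotStates(raw, allNodeIds) {
  var visited = parseVisited(raw);
  return allNodeIds.map(function (id) {
    return visited.indexOf(id) !== -1 ? "seen" : "unseen";
  });
}

// Inside a Storyline "Execute JavaScript" trigger, this would
// bridge to the player, e.g. (variable name is hypothetical):
//   var player = GetPlayer();
//   player.SetVar("visitedNodes",
//                 recordVisit(player.GetVar("visitedNodes"), "node3"));
```

A trigger on the My Path layer can then color only the dots whose state is "seen", leaving unexplored branches unmarked.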

Targeted examples

I worried that we didn’t have examples, so I worked with my Internet Time Alliance colleagues to get them. Again, the examples were boiled down to the essentials. There was some concern that they were too telegraphic to be useful, but I was deliberately shooting for cognitive gaps that require some mental work to make the connection.

I also have to admit that development proceeded much more slowly than normal. It wasn’t the distance so much as that this was A) exploratory in a sense, with us negotiating shared understandings and iterating, and B) a sideline for both of us. Both partners had things to do to pay the bills, whether conferences or meetings to attend, or deadlines on other projects.

Consistency: Little things mean a lot

As the content started coming together, it became clear that there were some inconsistencies between modules. While understandable, it wasn’t desirable. As a consequence, we took a sustained look at the content as a whole, and we ironed out some difficulties in the overall experience such as language and formatting.

There was also a surprise for me: The content had been looking good, but I found the readability of some text a challenge. When I mentioned it, I learned that the visual design team hadn’t yet gotten engaged. Actually, they hadn’t gotten “re-engaged,” as they’d been involved early on but had been on other things while the content iterated. The result of their intervention was small yet important. They improved the readability, but also provided the resulting “comic panel” visual style that opened the course and each module and hinted at what was to come, a definite benefit conceptually and visually.

The editorial review process: Discoveries

One interesting outcome was the difference in perspectives. There were good, careful eyes on the content on each side at various times. Then, as things came together, it became clear that there were inconsistencies among the various components, and it proved useful to step back and take a holistic view. Ultimately, it was worthwhile to grab a screenshot of each activity’s ending and compare them, which revealed some differences worth remedying. The alternating handoffs and cycles made reviewing an iterative process, but having different eyes on it at different times increased the quality of the experience.

And, with a deliberate effort to ensure that everyone had a chance to contribute, there were times when I stepped back and decided not to polish too much of what was done. It needed to be good enough, but there are times to step back and let someone else’s vision take hold, to help them develop confidence. This was the premise of the content, and I felt we should practice what we preach.

Output issues: Flash

As I indicated at the outset of this post, the issues with output continued. While HTML5 output was both promised and desired, it proved problematic to achieve. With Flash being increasingly rejected across browsers for valid security reasons, the continuing reliance on it was a real concern. At this point, our hope is that the final output can be delivered “Flash-free.” It finally ended up working on the iPad: not as fast as in a browser, but acceptably.

Learnnovators: It was tricky to get a working output that played well without the Flash plugin. Every time the course was updated (and it was updated quite a bit, since we were experimenting and going back and forth on look and feel as well as functionality), we found it a challenge to get a functioning HTML5 course. We reached out to the Articulate community and did research on our own in parallel. Eventually, we found that the source files had gotten corrupted for no specific reason, so we ended up redeveloping the course from scratch.

And now, we await your feedback on the result

Overall, we tried to stick to an activity-centric design. We put scenarios and similar decisions up front, and the reference materials were accessible but not required. We worried, of course, that this model might mean people wouldn’t read the reference materials, but we hoped we’d made the scenarios challenging enough. There were times when the choices were obvious, and sometimes they were overly ambiguous, but real situations can be that way as well, and we ultimately felt there was a reasonable balance. We’ll find out empirically how we did!