Is failing to learn, in itself, a useful path to learning?
In their 1993 book Art & Fear, David Bayles and Ted Orland tell the story of a ceramics teacher who split his class into two equal groups. The first group was to be graded on the quantity of pots they produced, while the second group would be graded on quality—and their ability to produce a “perfect” pot.
When it came time to grade, students were surprised to discover that, in the process of producing many pots, the “quantity-focused” group actually produced better quality pots than the group that had perfection as their goal.
It’s a memorable story about the power of rapid experimentation over perfectionism. And it’s partly memorable because it’s so alien to our culture, where failure is a dirty word.
Psychological safety
Failure has been at the heart of my research around learning agility for individuals and organizations.
There has been interesting research on the role of games in encouraging experimentation and a playful approach to problem solving. But perhaps the most compelling findings came from Google’s investigation of high-performing teams in its Project Aristotle initiative.
Named for Aristotle’s observation that “the whole is greater than the sum of its parts,” the project was a robust, data-driven investigation into what makes teams work. One of its key findings was that psychological safety was central to high performance.
Psychological safety has been championed by the likes of Harvard professor Amy Edmondson, and involves providing an environment where people feel respected, are heard, and are encouraged to share their work, their challenges, and yes … their failures.
The alternative to psychological safety is an environment of blame and shame, where there is no place for vulnerability or honest sharing.
The blame game
In 2009, academics Nathanael Fast and Larissa Tiedens conducted an experiment in which they divided a test group into two cohorts. They gave each cohort a different article about then-California Governor Arnold Schwarzenegger’s failed attempt to pass several propositions.
In the first article, Schwarzenegger took full responsibility for the failure. In the second version, he blamed political partisanship and special interest groups. Interestingly, the group that read the blame-laden article was much more likely to blame others for failures in their subsequent exercises.
This work seems to confirm what you probably suspected intuitively: blame is highly contagious, and the example set by leaders is particularly influential. Fortunately, there are a number of strategies that can help break the cycle of blame, summarized in Figure 1 from the Learn2Learn app.
Figure 1: How to end the blame game
Return on failure
One of the most useful models I’ve found in relation to failure was put forward by Julian Birkinshaw and Martine Haas in a 2016 Harvard Business Review article. I’ve slightly adapted their definition below.
Figure 2: Return on Failure formula developed by Birkinshaw and Haas, used with permission
It’s a simple but powerful concept that focuses on increasing returns by:
- Increasing actionable insights through deeper analysis and reflection, or
- Decreasing resources invested through prototyping or failing faster.
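In rough terms, the adapted formula in Figure 2 boils down to a simple ratio. The sketch below is my own paraphrase of it, not Birkinshaw and Haas’s exact wording:

```latex
% A simplified rendering of the return-on-failure idea described above:
% grow the numerator (insights) or shrink the denominator (resources)
% to improve the return on any given failure.
\[
  \text{Return on failure} \approx
    \frac{\text{actionable insights gained from the failure}}
         {\text{resources invested in the failure}}
\]
```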
Let’s look at each of these in a little more detail.
Actionable insights
Part of discovering insights beyond a particular failure is the ability to “dig deeper” beyond face value to reveal underlying systemic issues. This might involve repeatedly asking “why” in order to move from a first-order cause to a root cause.
For example, about four years ago I was the lead solution designer on a half-million-dollar project for a major telco. The job involved arming product managers with a range of new skills, and it was an opportunity for us to roll out a powerful 70:20:10-inspired solution that encouraged informal and on-the-job learning.
Unfortunately, by the criteria we established and based on the relatively short run of the program … it failed.
Part of the challenge was that the delivery team struggled to roll it out. For example, they decided our 15-minute webinars, which were designed to scaffold on-the-job experiences, were impractical. Instead they bundled them into a single two-hour webinar at the beginning of the program!
In this instance, the first-order cause was the failure of the delivery team, which was the initial focus of our anger and frustration. However, asking “why” several times led to the root cause: we had not taken the time to understand the delivery team’s pain points, needs, and capability.
Reacting to the first-order cause might have led to replacing the delivery team. Responding to the root cause instead, we used the experience as the catalyst to adopt design thinking so we could better empathize with and understand key stakeholders. That transformation has meant the initial failure has more than paid for itself since.
Resources invested
The other side of the equation is reducing the amount of resources invested. The obvious way to do this is to adopt an iterative, prototype-driven strategy.
I define prototypes as a way to create previews of an end experience. My preference is to prioritize low-fidelity prototypes, which might consist of sketches, comic-style walkthroughs, or wireframes.
For example, in an onboarding program, a client wanted to “show, not tell” values around innovation and embracing technology. One suggestion from our co-design process was to use a simple chatbot as part of the first three-month experience.
I was sold on the idea—who doesn’t want to build a chatbot? The co-design group were excited, so we progressed to low fidelity prototyping.
In this instance, we created a three-page comic describing the first day, the first week and the first month of one of our personas—the comic showed her receiving chatbot prompts and check-ins via her phone.
The co-design group used these comics to field test the approach. I briefed them to use the process to better understand our audience rather than to defend our idea, so they dutifully asked multiple “why” questions and tried to empathize.
They discovered our audience resented the idea of personal phones being used for work, and a significant group feared that a chatbot would mean managers would play a more hands-off role.
In this way, we were able to fail fast, as we quickly moved on to other ideas that did resonate with the audience.
That raises an important point about prototypes. Rather than simply testing the idea in general, I’ve found it crucial to ask: what part of this idea is most likely to fail?
For example, some time ago we were considering rolling out a coaching initiative to support a broader behavioral change program. The obvious potential fail point was that managers would not prioritize the required coaching conversations.
To quickly test this, we generated a list of potential manager tasks, including the coaching tasks we needed them to do, and asked a range of managers to rank them in priority. The required coaching conversations were ranked at the bottom of their lists, which gave us valuable information on how to proceed.
Final points
This article is an opening salvo in a much bigger conversation, and it’s also a call to action.
Please let me know about your adventures in failure—the times when you have gained insight by failing to learn. How are you encouraging psychological safety? What strategies do you have to identify failure and dig down to the root cause? And how do you minimise resources invested in failure?
Tell me your stories via the comments here, LinkedIn, or Twitter.
I’ll be sharing more stories and tips on embedding a positive culture around failure in upcoming articles, so stay tuned for future columns.