When you're done with a project, you record what went well and should be repeated, and what went wrong and should be avoided. But do you ever actually revisit these findings on future projects? If not, you're passing up crucial knowledge. Martin Ivison describes how his organization created a process to learn from past experiences.
Imagine you’ve just executed a large project. You’ve designed, built, and tested for months or years at money-draining, people-burning cost. Finally, you push this thing out the door and into the world.
Then, as you break out the champagne and bonus checks, you sit your team down and ask for the scoop. What happened? While the experience is still fresh in their minds, you want to know what went well and should be repeated, and what went wrong and should be avoided in the future. Everybody is keen to share, and you collect the data, document it, store it, pat everybody on the back one last time, and then … well, you move on.
But are you ever going back?
At my present organization, we thought about this as we were redesigning a knowledge repository and came upon reams of items documenting these so-called “lessons learned.” When we blew off the dust and opened them up, they were clearly full of valuable, money-saving ideas. Yet, no one seemed to have used them after the initial project.
We had executed similar projects since, but they mostly seemed to have started with a clean slate, unwarned by these ghosts of projects past. Why not? Why didn’t the “lessons learned” we collected in our retrospectives at test closure stick?
We knew for a fact that individual team members learned from their experiences. They figured out what worked and what didn’t, and this knowledge was available to them the next time around. We also knew that more often than not, those team members were not available to new, similar projects that could have benefitted from that experience.
In other words, in an age of dynamic workforces, contracting, and outsourcing, chances were that individual learning—or even team learning, as in agile—was not available when it was needed. Clearly, what we needed instead was organizational learning.
But how does an organization learn? How does it best make use of its long-term memory?
Understanding Human Memory
To understand this, let’s start with the way it works in individuals.
People have two types of long-term memory: the so-called explicit (or declarative) memory, and the implicit memory. Explicit memory consists of all your learned facts, knowledge, and personal narrative. This is all stored in the same part of your brain, the medial temporal lobe, a central library of sorts. Implicit memory, on the other hand, stores your spectrum of behavioral learning, such as skills, habits, and emotional responses, all the way down to your reflexes and muscle memory. It is not stored in a central space but is woven into the fabric of many parts of your nervous system.
So, when we think about organizational learning, we need to think about changes in behavior, not acquisition of facts or history. Think riding a bicycle or learning to swim, not reciting a poem. So, instead of an organization’s explicit memory—its knowledge base or document repositories—we will need to identify its implicit memory, and make our lessons stick there.
At my organization, we realized that every retrospective session, every observation, and every suggestion had to change something concrete. Our lessons learned would only stick if they managed to change our cogs and gears. The organizational equivalents of an individual’s skills, habits, and reflexes are its capabilities, processes, and systems, and, on a higher level, its culture, or emotional responses.
Getting Started with Real Organizational Learning
When we got to work, it was not as hard as it sounded. We did two things. First, we created a feedback mechanism very much like our defect reporting (in fact, we used our bug tracking system to do it). This “suggest an improvement” tracker is open to the entire organization, and, like defects, suggestions are regularly triaged and evaluated. If accepted, the improvements are included in quarterly updates to our “stuff”—meaning our tools, standards, methods, templates, training materials, service definitions, vendor requirements, you name it.
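To make that mechanism concrete, here is a minimal sketch of what such a tracker looks like, written as code. In practice we simply reused our existing bug tracking system rather than building anything new, so every name and field below is illustrative, not our actual implementation.

```python
# Illustrative sketch of a "suggest an improvement" tracker modeled on a
# bug tracker. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    NEW = "new"            # logged, awaiting triage
    ACCEPTED = "accepted"  # will go into a quarterly update
    REJECTED = "rejected"  # triaged out, with a reason
    RELEASED = "released"  # shipped in an update to our "stuff"


@dataclass
class Suggestion:
    title: str
    description: str
    submitted_by: str
    target: str            # e.g., "template", "standard", "training material"
    status: Status = Status.NEW
    notes: list[str] = field(default_factory=list)


def triage(suggestion: Suggestion, accept: bool, reason: str) -> None:
    """Regular triage, exactly as we do for defects."""
    suggestion.status = Status.ACCEPTED if accept else Status.REJECTED
    suggestion.notes.append(f"{date.today()}: {reason}")


def quarterly_release(backlog: list[Suggestion]) -> list[Suggestion]:
    """Batch accepted suggestions into the next quarterly update."""
    batch = [s for s in backlog if s.status is Status.ACCEPTED]
    for s in batch:
        s.status = Status.RELEASED
    return batch
```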
Second, we installed a mandatory activity in our test closure process to gather lessons learned and turn them into suggested improvements. It’s a creative process. The marching orders are: if you’ve learned what works and what doesn’t, figure out what needs to change in the framework to support or force that change the next time around. Then log that suggestion.
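Continuing the same illustrative sketch, you can think of the closure activity as a step that refuses to accept a lesson unless it names a concrete framework change:

```python
# Continues the sketch above (reuses the hypothetical Suggestion class).
from dataclasses import dataclass


@dataclass
class Lesson:
    observation: str       # what went well or badly on the project
    framework_change: str  # what in our "stuff" must change to make it stick


def close_out(lessons: list[Lesson], submitted_by: str) -> list[Suggestion]:
    """Mandatory test-closure step: every lesson becomes a logged suggestion."""
    return [
        Suggestion(
            title=lesson.framework_change,
            description=lesson.observation,
            submitted_by=submitted_by,
            target="framework",
        )
        for lesson in lessons
    ]
```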
After a little over a year of making these changes, we have gathered—and acted on—a good two hundred of these suggestions. That is quantifiable, persistent learning.
Here’s an example: A team working on a project noticed that tracking technical fixes for defects was easy, whereas nontechnical solutions (e.g., changes to business processes or training materials) created diffuse accountabilities and often got lost somewhere. This happened especially in projects that could least afford it—the ones squeezed by risk and time. The suggestion that came out of it was to make small changes to the wording and flow in the defect tracking system to accommodate nontechnical resolutions. This way, workarounds would be tracked, triaged, resolved, and verified in the same high-visibility way as code fixes.
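In tracker terms, the change amounted to something like the following sketch (again with hypothetical names): resolutions gained nontechnical types and a required owner, so accountability could no longer diffuse.

```python
# Hypothetical sketch of the wording/flow change: the defect tracker's
# resolution field allows nontechnical outcomes, each with a named owner,
# so process and training fixes are triaged and verified like code fixes.
from dataclasses import dataclass
from enum import Enum


class ResolutionType(Enum):
    CODE_FIX = "code fix"
    PROCESS_CHANGE = "business process change"
    TRAINING_UPDATE = "training material update"
    WORKAROUND = "documented workaround"


@dataclass
class Resolution:
    type: ResolutionType
    owner: str              # a named owner, so accountability is not diffuse
    verified: bool = False  # closed only once verified, like any code fix
```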
Of course, we didn’t do away with our knowledge repositories. They are still there, and there is value in them. From time to time, we still gather around the fire and tell old tales of projects won and lost and of defects of tremendous magnitude found and fixed. But it is no longer our main way of remembering. Our organization now remembers in its core, as a learned behavior.