Software systems are being delivered to our customers at an ever-increasing rate. How can we keep up with the pace whilst still maintaining the quality of our code? Over a series of three articles I will demonstrate how, by focusing on the customer throughout our delivery cycle, we can deliver reliable working software with confidence, reduce the number of defects, reduce our delivery timescales and ultimately save money. You may think this is nothing new, and that agile development has long since answered this question. However, even in the agile world there are loopholes which allow us to bypass the customer, leading us to deliver what we think they want rather than what they were expecting.
PART ONE
Introduction
Writing shippable code is a daunting task. Ask yourself "Will my customers really be happy?" It's impossible to predict with certainty that a complex system with thousands of lines of code is going to be defect free enough for our customers to be happy. It's true, we know it's never defect free, so we compromise with defect free enough. Our prediction is often based on the number of lines of code we have touched or written, the number of defects we have found historically per hundred lines of code and the average time it usually takes us to fix a defect. This allows us to project and track against a find/fix curve showing how, after 100% code complete, we are stabilising our system. We rely on prediction because of the uncertainty introduced throughout our development cycle by the chains of information hand-offs and the honest fact that we know we will produce a system which genuinely does have defects.
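To make that arithmetic concrete, here is a minimal sketch of such a projection; the line counts, defect density and fix rate are illustrative assumptions, not data from any real project:

    # A minimal sketch of a find/fix projection. The figures below are
    # illustrative assumptions, not real project data.

    lines_touched = 50000          # lines of code written or modified
    defects_per_100_loc = 1.5      # historical defect density
    fix_days_per_defect = 0.5      # average effort to fix one defect

    predicted_defects = lines_touched / 100 * defects_per_100_loc
    fix_effort_days = predicted_defects * fix_days_per_defect

    print("Predicted defects:    %.0f" % predicted_defects)
    print("Stabilisation effort: %.0f person-days" % fix_effort_days)

Note that nothing in this calculation involves the customer or the requirements; it is purely a statistical model of our own past performance.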
What if we could remove, dare I say eliminate, the ambiguity and the reliance on prediction right from the off? What if we could produce shippable code with confidence? An impossible task? Several techniques used within agile software development (Test Driven Requirements, Test Driven Development and Coding by Assumption, for example) can improve both our confidence in our code and its quality, yet they are still rarely used by many organisations.
Going on a Journey
Software systems are becoming more and more complex, though to our customers they have to appear simpler and simpler. People are much more aware of the time they spend immersed in our systems, especially in a day and age where every click counts. They demand simplicity whilst still interacting, under the hood, with ever-changing technology, standards and protocols. Yet we are still expected to keep up with their expectations, deliver on time, stay within budget and produce a robust, defect free system.
It used to be acceptable for applications to fail or get stuck every once in a while, but those days are long gone. How can we improve the systems we produce so that they genuinely do what they are supposed to do without failing?
Imagine I'm going on a journey to somewhere I have never been before. I'm going to take my family on a day out and we intend to have a lot of fun when we get there. My wife's expectation is that we will leave with enough time to enjoy a full day, whilst still getting home at a reasonable hour to put the kids to bed. What do we do? It used to be that we would just pick up a map, find where we were going and mark out our route along the major roads. We'd guess how long it would take to get there based on our own personal experience and suggest a set-off and return time - job done. Though for me at least, most places ended up being "2 hours away", whatever the destination. In actual fact it would take longer; we'd arrive late, leave early and not get what we wanted out of the day.
These days we go to our favourite internet mapping system. We key in our start and end points and the time we wish to leave, maybe adjust the route on screen to avoid certain roads, and we get a very detailed plan showing exactly how long each stretch of road is, the time it should take to travel along it and our cumulative time spent on the journey so far. No arguments about the route or the time (strangely, still a 2 hour journey time, but I keep quiet). What's interesting is our behaviour on the journey. My wife tells me to take the next left, which should be in about 3 minutes' time and is just under 1/2 km away. As I turn left, we both say "that didn't take 3 minutes". We continue doing this throughout the journey until we arrive at our destination on time and have a great day out.
The route plan has set our expectations for each section of the plan. We test these expectations at regular intervals to give us confidence in the plan and to make sure that we are going to reach our destination as expected. Located at the bottom of the route plan is a suggestion that there may be inaccuracies along the way. If we are able to identify them, we can provide feedback online when we return home, improving future route plans for other people.
Our instinctive behaviour is to test our assumptions regularly and to take corrective action when these assumptions are not being met. If the plan isn't clear enough and we keep making mistakes, we might abandon the plan and start following our noses in a vain attempt to get there on time. We also like to learn from our mistakes, which leads us to reflect on the journey once we return to see how we can have a better experience next time.
Having the route plan indicate that there might be defects is a clear admission from the provider that not only was the plan generated from the information it had at hand, but that there will be allowable defects from time to time. This also changes our behaviour. It suggests to us that defect free enough is ok, and we go along with it. We even feel really good when we find a mistake and can't wait to let the route provider know. We stay loyal to the suggested route just in case we find another one. We are being good citizens, fixing the world for everyone else.
Developing a complex software system is like going on a journey, but with a fundamental difference: we know our start point, however our final destination is unclear. We know roughly the direction we should be heading (or at least we think we know), but we constantly make adjustments to our journey as we go.
Traditional software development methodologies tell us to plan, plan, plan, and then when we are done, plan some more. By this I mean produce project plans detailing how we will capture all of our requirements, produce a comprehensive design, transfer the design into code, test our system, stabilise it by removing defects and then release to our customer. We follow this plan relentlessly until we are either done or realise that we need to re-plan and re-adjust our timescales (the latter being the most likely). But it was the detailed journey plan for our day out which was most successful. This goes against the grain for agile software development. Or does it?
The Agile Journey
Agile software development teaches us to produce our system by delivering value to the customer through small increments of potentially shippable code. After each iteration we reflect on how we worked and make adjustments to improve (Do, Inspect, Adapt). We plan our journey using the product backlog, but we only get down to detail within an iteration for the elements we have identified as needing to be worked on at that time.
For this journey, it's like using a GPS system in our car. We know where we need to get to, but the GPS is constantly recalculating our route and our progress. If a road takes longer to travel than expected, it adjusts our arrival time. If we choose to deviate from our route (like finding somewhere to eat in a town we are passing), it will suggest we make u-turns until it recalculates a new route from our current position. Newer GPS route systems will even keep an eye out for hidden problems (road works, traffic jams and accidents), again recalculating our route if these are encountered. It is constantly testing its own assumptions (as are we) to ensure that we have a trouble free journey and arrive safely at our destination. This is something we are not very good at doing when developing our software systems. In traditional software development models our assumptions are tested towards the end of the life-cycle, while with agile software development we bring that testing closer to the code being written through automated testing. We incrementally derive our requirements through user stories / use cases, then rely on our automated tests to validate whether our code meets our assumptions. This is a huge improvement over testing late in the cycle. Better still, how about testing our assumptions about our requirements, prior to any system design or code being produced, by expressing them as a series of tests? Enter the world of Test Driven Requirements.
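As a flavour of what this might look like, here is a minimal sketch of a requirement expressed as an executable test before any design or code exists. The story, names and figures are hypothetical; the placeholder deliberately fails, so the requirement is visibly "not met" until the system genuinely satisfies it:

    # A requirement written as a test before any design or code exists.
    # Hypothetical story: "As a shopper, I receive a 10% discount on
    # orders over £100."
    import unittest

    def price_order(total):
        # Deliberate placeholder: nothing has been designed or built yet,
        # so running the tests shows the requirement as not yet met.
        raise NotImplementedError("system not yet implemented")

    class OrdersOverOneHundredAreDiscounted(unittest.TestCase):
        def test_order_over_threshold_gets_ten_percent_off(self):
            self.assertEqual(price_order(120.00), 108.00)

        def test_order_at_threshold_is_unchanged(self):
            self.assertEqual(price_order(100.00), 100.00)

    if __name__ == "__main__":
        unittest.main()

The tests start red by design; they only go green once real code exists which delivers the outcome the customer asked for.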
Are we there yet?
Let's explore first how we know if we have reached our destination. For a car journey it's quite simple: we have arrived and stopped the car. In software, though, it's a different story. We think we are done when all our features are implemented and tested, and we have stabilised our defect find rate to a level we expect will be acceptable to our customers. However, due to the complex nature of the system we have produced and the project timescales, we know that there are certain combinations which have either not been exercised or not been exercised enough. We stress test key paths through our system but knowingly omit large sections.
An extremist would state that we are done when there are no defects in our system. Since we know in our heart of hearts that there will be some defects, and that our timescales will not permit a completely defect free system, we compromise with defect free enough. We soften the blow of this message by producing find/fix projections. We even set targets against milestones on our project plan. Find/fix curves are calculated by estimating how many lines of code we will touch or introduce, how many defects per 100 lines of code there will be and how long it generally takes us to fix a defect, then projecting this information on a graph. Our targets are set against a launch criterion for reliability which is rarely zero defects. It's this thinking which allows us as a delivery team to become complacent about bugs in our system. At the end of each iteration we kid ourselves by announcing that we are done. We demonstrate the functionality we have added and hide the fact that there are certain scenarios which will cause problems. We say that we have tested within the iteration when actually a good deal more testing is required. We pat ourselves on the back for a job well done and shelve what's remaining, allowing us to focus on the next iteration. We tell ourselves that we are done, but we know that we are not completely done.
Changing to the mindset of completely done is not easy. In most circles people think that this only applies to the software developers. Managers find it easy to ask "are we done yet?" but fail to realise they are also a part of this message. They should be able to see progress clearly, and be part of it, without asking, and have confidence in what they see. After all, our GPS route system constantly informs us, with elapsed journey time and estimated arrival time displayed on its screen throughout our journey. We never stop, get out our calculator and double check the numbers; we trust what we see. The information radiates out to us and we are kept warm in the knowledge that all is fine.
Knowing that we are completely done requires one key piece of information: our criteria for a successful outcome. If we do not know these then we cannot even claim that we are done. This doesn't just mean that there are no defects in the code, or that we haven't broken existing features; it means that the features are also fully satisfying the needs of the customer. Even when we do express our requirements as user stories or use cases, we often fail to provide acceptance criteria. If we did, then we would know for each requirement whether we had achieved a successful outcome. Continuous automated testing against these outcomes would then start to show us how far we had come and how far there was to go, just like the GPS route system, without the reliance on prediction models.
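As a hypothetical sketch of that GPS-style progress report, the snippet below treats each user story's acceptance criteria as automated test results and reports how far along the journey we are; the stories and results are invented for illustration:

    # Each user story's acceptance criteria map to automated test results
    # (True = passing). Stories and results are invented for illustration.
    stories = {
        "Shopper receives 10% discount over £100": [True, True, False],
        "Shopper can pay by card": [True, True, True],
        "Shopper receives an email receipt": [False, False],
    }

    passing = sum(sum(results) for results in stories.values())
    total = sum(len(results) for results in stories.values())

    print("Acceptance criteria passing: %d/%d (%d%% of the journey)"
          % (passing, total, 100 * passing // total))

    for story, results in stories.items():
        status = "done" if all(results) else "in progress"
        print("  %s: %s" % (story, status))

Progress here is measured directly against the customer's criteria for a successful outcome, not against a prediction of how many defects we expect to find.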
Part two of this article will focus on our criteria for a successful outcome and how we express them in the context of software, and will uncover the loophole many agile teams overlook. Part three will then demonstrate how this loophole can be overcome and how those criteria can be transferred into our code so that we can deliver shippable code.
About the Author