If you are agile, you might spend some time estimating. If you’re using Scrum, you estimate what you can do in an iteration so you can meet your “commitment.” You might spend time breaking stories down in flow to achieve a particular cycle time. You might spike for a day so the team recognizes what this feature will really take in time or budget.
Do your estimates provide value? To whom?
Project Estimation History
We estimated in waterfall or phase-gate projects so we would have options at the next milestone. It was too difficult to see all the way to the end of a waterfall project, but it wasn’t so hard to see to the end of the current phase. At the end of this phase, we re-estimated the next phase or the rest of the project. The organization had the ability to cancel the project if they didn’t like the estimate.
If you work in a timebox (iterations), you don’t quite know how much you can do in that timebox. The idea behind a timebox is that you complete your work, by definition, at the end of the timebox and then stop.
Estimation Problems
I have seen teams estimate entire product backlogs, even though an agile approach expects the backlog to change. I have seen teams spend an entire day estimating what they can and cannot do in a two-week timebox (never mind what they do for four-week timeboxes).
Because agile promotes change, I can understand a gross estimate for a product backlog. “If we do everything here, it could take us six to nine months. We’ll know more after the first couple of iterations.” Or, you could say, “Based on what we see here and the unknowns we have, we have a 50 percent confidence in finishing this backlog in six months. We have an 80 percent confidence in eight months. We have a 90 percent confidence in nine months.” When you provide confidence percentages, your estimate recipients understand more about your uncertainty.
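To make the confidence idea concrete, here is one way a team might derive percentages like those: simulate against past throughput. This is a minimal sketch, not a method from the article; the throughput history, backlog size, and iteration length are invented for illustration.

```python
import random

# Hypothetical inputs -- invented for illustration, not from the article.
throughput_history = [6, 8, 5, 9, 7, 6, 8]   # stories finished in recent iterations
stories_remaining = 120                      # rough size of the product backlog
iteration_weeks = 2

def simulate(history, remaining, trials=10_000):
    """Monte Carlo: how many iterations to empty the backlog,
    if future iterations resemble past ones?"""
    outcomes = []
    for _ in range(trials):
        left, iterations = remaining, 0
        while left > 0:
            left -= random.choice(history)   # sample one past iteration's throughput
            iterations += 1
        outcomes.append(iterations)
    return sorted(outcomes)

runs = simulate(throughput_history, stories_remaining)
for confidence in (0.5, 0.8, 0.9):
    iters = runs[int(confidence * len(runs)) - 1]
    print(f"{int(confidence * 100)}% confidence: {iters} iterations "
          f"(about {iters * iteration_weeks} weeks)")
```

The estimate you hand over is then a set of dates with confidence levels, not a single number.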
I don’t understand why teams spend more than a couple of hours estimating stories for an iteration, or why teams spend more than that estimating a product backlog.
Our estimates are not good for large chunks, especially if we haven’t done anything like this before. Our estimates are much better for small chunks, such as a one-day story, and even those estimates might be off when we discover problems.
Why Do You Estimate?
Let’s return for a moment to the purpose of estimation. You estimate for one of these reasons:
- To provide an order-of-magnitude size/cost/date for the project, so we have a rough idea for planning purposes. An order-of-magnitude estimate means we invest just enough time in it to believe in its accuracy for planning.
- To provide a “commitment” for an iteration
- To know when we will be done, because we are close
- To allocate money or teams of people for some amount of time
- To know whom to blame
Do you estimate for a different reason?
Let’s take these reasons in order.
Provide an Order-of-Magnitude Estimate
It makes sense to provide an order-of-magnitude estimate, especially for relatively small projects. I have been in the position of having to choose which project will provide us revenue sooner rather than later. Some of you also work for small organizations, where “When can we get revenue?” is a good question.
However, I see organizations try to decide on large programs based on the estimate. The larger the effort, the more difficult it is to estimate. You can’t depend on ideal days. You have to know that no one is multitasking. You need to have experience with the technology. Otherwise, your estimate is off, and the only question is by how much.
That estimate is not as useful as a small-project gross estimate. The way to make program estimates useful is to iterate on them. Now you’re back to spending a lot of estimation time.
Provide an Iteration Commitment
Have you seen teams miss their iteration “commitment” over and over again? I have. Often, the teams decide they need to learn to estimate better—or some manager decides the team needs to estimate better.
The team might need to estimate better. And the product owner needs to learn to write small stories that take a team-day or less.
Often, both the product owner and teams have trouble with this idea. We, as an industry, find it difficult to think, “How little can we do and have value?” instead of “What will it take to finish this feature set?”
Know When We Will Be Done
You might think you are close to done on a project. In that case, maybe it makes sense to estimate how much more you will have to do to meet the release criteria.
You can provide a detailed estimate for about three to four weeks of work. However, in my experience, providing a detailed estimate for much more than that is an exercise in frustration. The stories farther out tend not to be detailed, and you run into the same problems you have when generating a gross estimate. The farther out you look, the bigger the stories are.
Allocate Money or People
Sometimes, the value decision in the project portfolio evaluation has a cost component—especially if you need to decide between a few projects, some of which are shorter in duration than others. I still prefer to make the value decision without using cost, but in smaller organizations, cost might be part of the value equation.
Blame Someone
If you provide an estimate as a single point in time, you set yourself up for the blame game.
The way to avoid this is to create three-point estimates, estimates with a confidence percentage, or estimates that spiral in on a date. Those estimates create the expectation that you will iterate on your estimate and refine it as you proceed. That's an agile approach to estimation, isn't it?
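For illustration, here is what reporting a three-point estimate might look like, using the common PERT-style weighting. The numbers and the weighting convention are assumptions for the example, not a prescription from the article.

```python
# One chunk of work, estimated three ways, in team-days (numbers invented).
optimistic, most_likely, pessimistic = 3, 5, 10

# PERT-style weighted mean and rough spread -- one common convention.
expected = (optimistic + 4 * most_likely + pessimistic) / 6
spread = (pessimistic - optimistic) / 6

print(f"Report the range {optimistic}-{pessimistic} team-days,")
print(f"with an expected value around {expected:.1f} (plus or minus {spread:.1f}).")
```

The point is to hand over the range, not a single number that can later be held against you.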
Estimate Better
Estimation is a problem for many agile projects. Ask what kind of value your estimates create and consider whether you need to reevaluate your process.
User Comments
There are other reasons we estimate, but I'll start with one you mention: commitment. Yes, we estimate to determine what we can commit to, but we don't estimate to capacity. Instead, in my organization we apply a "focus factor" to help us offset the cost of the unknown. Some would simply call this a buffer, but ultimately we know distractions will occur and unknowns will surface that lead to added tasks we didn't previously consider during planning.
In the end we'll be off, but we're okay with that. Over time, the variation among our initial t-shirt-sized estimates, our detailed estimates, and our actual estimates gives us the "math" we need to better forecast and protect against the risk of over-commitment.
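To make the focus-factor arithmetic concrete, here is a minimal sketch; the team size, sprint length, and factor are invented for illustration, not the commenter's actual numbers.

```python
# Illustrative focus-factor arithmetic; every number here is invented.
team_members = 5
days_in_sprint = 10
focus_factor = 0.7   # assumed fraction of capacity left after distractions and unknowns

raw_capacity = team_members * days_in_sprint      # person-days on the calendar
plan_to = raw_capacity * focus_factor             # capacity the team plans against

print(f"Raw capacity: {raw_capacity} person-days")
print(f"Plan to: {plan_to:.0f} person-days; "
      f"the other {raw_capacity - plan_to:.0f} absorb the unknowns")
```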
We also task to learn. In our organization we're dealing with many new COTS applications that our development teams are working to build up knowledge around. Tasking helps teams learn from one another what each team member thinks it will take to deliver a particular user story or solve a defect. This gives the team an opportunity to collaborate on a plan of attack and, ultimately, to ensure that for whatever is built - new feature or resolved defect - they all agree in concept on how it will be tested. A form of test-driven development, I suppose.
Certainly these learnings could emerge throughout the sprint as the team tackles each item, one by one, but as much as we do to encourage the team to collaborate and pair, some splintering occurs. As such, tasking serves as a forcing function to help ensure this type of initial collaboration occurs. Ultimately, though, we want it to be as lightweight as possible, with teams avoiding painful contract negotiations and instead focusing on coming up with an initial plan of attack.
The commitment is important in my organization because it helps our customers understand what is coming next and, in turn, align their resources to support activities such as change management. With sound forecasting we've seen teams reaching upwards of 97% predictability. So all things considered, tasking can be a good thing. :)
Clint, your "tasking" sounds like a spike to me. It makes sense.
If you see value in your estimation, keep doing it! I saw a number of teams where there was no value in their estimation--they estimated, but between multitasking, the delay in starting, and the changes in the roadmap and the backlog, the estimates were no longer valid.
Johanna - what you list is probably a lot of the reason we have the #noestimates movement in the first place.
For me, I think it's a matter of scale. Estimates, at the team level, have some value (especially for newer teams). It's trying to leverage all those team estimates up into something larger that gets us into trouble. I think we need better tools for prediction based on real results and not artificial concepts like estimates and team velocity. Your thoughts?
Joel, my opinion has evolved from the time I wrote this. I no longer believe in "real" estimation at the team level for *agile* teams. If you're using a different approach, estimation might have value because the team delivers so infrequently.
I now believe in counting stories. Not tasks, stories, something that provides value to an end user. My experience with agile teams is that when they count stories and manage their WIP (work in progress), they can commit to some amount of work in some time period, if they wish to work in iterations. The more they count stories and report on the stories they complete, the more likely they are to deliver more stories. It's the old what gets measured issue.
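A minimal sketch of what counting stories can give you, assuming you track how many stories the team finishes per iteration (the counts below are hypothetical):

```python
# Hypothetical story counts from recent iterations.
completed_per_iteration = [7, 9, 6, 8]
stories_to_forecast = 30      # stories in the next chunk of work

average = sum(completed_per_iteration) / len(completed_per_iteration)
worst = min(completed_per_iteration)

print(f"At average throughput ({average:.1f} stories/iteration): "
      f"about {stories_to_forecast / average:.1f} iterations")
print(f"At the worst recent iteration ({worst} stories): "
      f"about {stories_to_forecast / worst:.1f} iterations")
```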
Team velocity can make sense if we count stories. It makes no sense once we create tasks. This has implications for work much more than two or three iterations out. I have no problem with a roadmap of feature sets (note my wording there: I'm not talking about epics or themes, because everyone defines them differently. I talk about features (stories) and feature sets). Once everyone realizes we might not need an entire feature set, we start to discuss what a real MVP or MVE (experiment) could be. We release ourselves from the tyranny of a large commitment and start to ask, "How little can we do and provide value?" This is where agile approaches shine.
I have a ton of this in my newest book, Create Your Successful Agile Project, and a slideshare about rolling wave planning, https://www.slideshare.net/johannarothman/think-big-plan-small-how-to-us.... (Yes, my next book will be a product owner book. I know, you are so surprised :-)