I have to admit, I’m pretty terrible at giving estimates for how long a task is going to take me. You would have thought I’d have this mastered by now, yet after many years in the industry I still tend to underestimate how long something will take. Why should this be?
Here is a list of reasons I’ve come up with. They pertain to me personally, but I’d guess they’re probably quite common:
- The irresistible compulsion to make myself appear a more competent coder than I really am. If being able to finish tasks quickly is the sign of a talented coder, then giving shorter off-the-cuff estimates makes you *appear* more talented, right? Obviously this is faulty thinking, but this sort of logic does play some part in my estimates. See next point.
- Second-guessing how long my superiors/peers expect it *should* take me. Again, this shouldn’t really enter the equation, but it does. See previous point.
- Not fully accounting for the number of hours in a work day I can actually devote to a task. That is, underestimating the time lost to meetings, random bugs artists might be hitting, impromptu debugging sessions with others, miscellaneous conversations in the kitchen when going to get a cup of coffee, older tasks which aren’t quite “done” (we all know tasks are never quite done, right?), waiting for assets to build, a bad check-in which breaks the code after a sync, and so on. You get the idea.
- Am I answering a slightly different question than the one the person doing the scheduling is asking? What they usually want to know is how long it will take before I can finish the task at hand and move on to the next one. What I’m usually answering is how long it will take me to check in the first semi-usable working version. Considerations such as bug fixing, cross-platform porting, and user friendliness aren’t usually accounted for sufficiently in my estimate.
- Many tasks I tend to work on require research and experimentation, which could potentially go on forever. I generally want to do enough research to have some idea of the pros and cons of the various approaches available to get the task done to the quality required, and I usually spend more time on it than I initially budget for. In the age of the internet there is usually more written about any given topic than I can consume, and it’s easy to get lost in the academic-paper wormhole: papers can be time-consuming to parse, and often require background knowledge that means recursively reading yet more papers.
- My time estimates assume that the initial approach I take works out. Sometimes it doesn’t.
- I do a lot of my algorithmic prototyping / proof-of-concept work in a sandbox. Lightning-fast iteration times, significantly less plumbing complexity, and intimate familiarity with my test harness code add up to me being about 5-10x more productive than when iterating on real production code. However, I tend to underestimate the extra complexities, and therefore the time, involved in converting a test-harness proof of concept into full-fledged production code.
- Sometimes you don’t realize what you don’t know until you actually dive deeper into a task. Uh-oh, there goes my estimate again.
- Work style and personality type. I have perfectionist tendencies, as I’m sure many of us here have. I usually fall into the camp of “anything worth doing is worth doing right”. I’ve learnt over the years that, while this is true a lot of the time, it’s simply not true all the time. Some issues call for an inelegant quick fix rather than the more time-consuming “correct” fix. Sometimes the time spent understanding why a bug is happening, even when you already have a fix, is simply time which could be better spent elsewhere. Sometimes it’s just a bad idea to reinvent the wheel. My natural tendency, however, is to spend more time “crafting code” than is strictly necessary.
- From time to time, completing a task isn’t simply a matter of implementing it. What you thought was going to be quick and simple would have been quick and simple, had your assumptions about the underlying code held true. Instead you find yourself working on top of layers of patched code and/or missing or broken functionality. The existing code may be in serious need of a rewrite, or at least some heavy refactoring. As a result, the real work on the task may start hours, days, or weeks later than you planned for, pushing out the end date that much further.
- Enthusiasm level for the task at hand. When giving estimates I usually assume I can maintain a pretty high enthusiasm level. The more a task drags on, or the more tedious it turns out to be, the more my enthusiasm wanes and the less productive I get. I’d guess there is maybe an 8x productivity differential for me between very enthusiastic and very unenthusiastic. The flip side, however, is that if I’m really enthusiastic about a task, I tend to make progress very rapidly.
So all of this together means the initial task estimates I give are maybe 2x too optimistic. On the other hand, giving that more realistic 2x-longer estimate from the outset can make it seem like I’m dragging my heels on a task.
Given a set amount of time and a task, people tend to fill all the available time getting the task done (Parkinson’s Law, I believe). If there’s any grain of truth in that statement, and I think there is, then giving the shorter (original) task estimate can actually be quite beneficial. It changes my “mental deadline”, and if my mental deadline is closer than the amount of time it’ll actually take me to finish up a task, that gives me a strong motivating factor to “get it done”. Let’s call this a “best case estimate” (#tongueincheek) – I usually give best case estimates when asked how long a task will take…
[This entry is cross-posted on my personal blog]