A big part of the problem is that management often isn't interested in honest estimates.
You can certainly find better practices in some organizations, but in many, a team simply gets handed an assignment with an arbitrary deadline, and no attempt is made to reconcile the triple constraint of scope, time, and cost with reality.
In a context like that, developers may not be asked to make estimates at all, and if they do make them, they don't believe they'll be taken seriously. So they never get the chance to learn how to estimate.
The accuracy of estimation also depends on the context.
If, for some reason, I had to fix a difficult-to-reproduce bug in a large system written in COBOL that runs on a mainframe, I'd have very little idea of how long it would take me to learn COBOL, understand the codebase, figure out the tools, and track the problem down. A COBOL jock would obviously do better.
On the other hand, I've worked on greenfield, well-specified business CRUD apps built on a framework that fit the problem, where I could estimate that something would take 21.25 hours and actually get it done in 19.75 or 22.50.
I've met developers who will absolutely refuse to estimate anything, but I think more often developers simply don't believe their estimates will be listened to.
The good news for both developers and managers is that estimation is a learnable skill. If you get in the habit of making estimates and testing them, you'll get better amazingly fast. See the classic book, Steve McConnell's Software Estimation: Demystifying the Black Art:

http://www.amazon.com/Software-Estimation-Demystifying-Pract...
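For a concrete (if simplified) picture of what "testing them" means: keep a log of estimate/actual pairs and periodically check your bias. The numbers below are made up for illustration.

```python
import statistics

# Hypothetical calibration log: (estimated_hours, actual_hours) per task.
log = [(8, 11), (3, 3), (16, 30), (5, 4), (10, 14)]

ratios = [actual / estimate for estimate, actual in log]
bias = statistics.geometric_mean(ratios)  # > 1.0 means chronic underestimation
print(f"on average, actuals run {bias:.2f}x your estimates")
```

Multiplying raw estimates by that bias factor is the crudest possible calibration, but even that beats never looking back.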
> The good news for both developers and managers is that estimation is a learnable skill. If you get in the habit of making estimates and testing them, you'll get better amazingly fast.
To an extent, though, the better you are at programming, the worse you'll be at estimating.
A bad programmer, given a one-week task similar to one they've done before (when it took them two and a half weeks), will estimate it at two weeks and do it in two weeks, plus or minus a bit.
A good programmer given the same task will try to apply the previous solution, or use a library that solves the problem, and usually get it done in two to four hours. But some of the time that won't work, and then they'll spend three days doing what would have taken the bad programmer two weeks, and a fourth day generalizing it so that when they have to do a similar task a third time, they can do it in two to four hours.
The good programmer's estimate will, therefore, frequently be off by a factor of ten or more.
In short, the better you are at automating what you've previously done, the more time you spend doing things you haven't done before, and so the more uncertain your estimates are.
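To put rough numbers on that argument (the probabilities and durations below are illustrative assumptions, not data): if reuse works 80% of the time and takes two to four hours, and otherwise the task takes about four days, the typical case and the expected case diverge sharply.

```python
import random

def good_programmer_hours(p_reuse=0.8):
    # Illustrative model: most of the time the previous solution or a
    # library applies and the task takes a few hours; otherwise, days.
    if random.random() < p_reuse:
        return random.uniform(2, 4)   # reuse works: 2-4 hours
    return 4 * 8                      # reuse fails: ~4 days at 8 h/day

samples = sorted(good_programmer_hours() for _ in range(10_000))
median = samples[len(samples) // 2]
mean = sum(samples) / len(samples)
print(f"median {median:.1f}h, mean {mean:.1f}h, worst case {samples[-1]:.0f}h")
```

Under these assumptions the median run takes about three hours, so "two to four hours" is an honest typical-case estimate, yet the mean is closer to nine hours and the bad cases run ten times the median.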
The central limit theorem says that if you add up enough independent random variables from any distribution, the sum will eventually start to look like a Gaussian (normal) distribution. Unfortunately, this is less useful than you might hope for software project estimation, both because the tasks in a project aren't independent and because, when the underlying distribution is strongly skewed or heavy-tailed (e.g. lognormal or exponential), "enough variables" can be far more than the number of tasks in your project: hundreds or thousands.
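You can watch how slowly that convergence happens with a quick simulation; the lognormal parameters here are arbitrary, chosen only to give a plausible skew.

```python
import random
import statistics

def project_total(n_tasks, mu=1.0, sigma=1.0):
    # Total duration of a project made of n_tasks independent lognormal tasks.
    return sum(random.lognormvariate(mu, sigma) for _ in range(n_tasks))

for n in (10, 100, 1000):
    totals = sorted(project_total(n) for _ in range(5_000))
    mean = statistics.fmean(totals)
    median = totals[len(totals) // 2]
    p99 = totals[int(0.99 * len(totals))]
    # A true Gaussian has mean == median; the residual gap here shows
    # how much skew survives as the number of tasks grows.
    print(f"n={n:4d}  mean/median={mean/median:.3f}  p99/mean={p99/mean:.3f}")
```

The gap shrinks as n grows, but far more slowly than intuition suggests, and real projects rarely consist of hundreds of independently estimated tasks anyway.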
> The good news for both developers and managers is that estimation is a learnable skill. If you get in the habit of making estimates and testing them, you'll get better amazingly fast.
It's not hard to learn to make reasonable estimates. Even for unfamiliar problems, I can often say that tasks like X have usually taken about Y in the past, and be right most of the time.
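Here's that "X has usually taken about Y" heuristic as a minimal sketch; the task history is invented, and the point is just that a median of past actuals, with the observed range attached, already makes a serviceable estimate.

```python
import statistics

# Hypothetical history of past tasks: (category, actual_hours).
history = [
    ("crud-endpoint", 5), ("crud-endpoint", 7), ("crud-endpoint", 6),
    ("report", 12), ("report", 20), ("report", 15),
    ("bugfix", 2), ("bugfix", 9), ("bugfix", 3),
]

def reference_estimate(category):
    # "X has usually taken about Y": median of past actuals for the
    # category, plus the observed range so the uncertainty travels along.
    actuals = [hours for cat, hours in history if cat == category]
    return statistics.median(actuals), min(actuals), max(actuals)

est, lo, hi = reference_estimate("report")
print(f"estimate ~{est}h (past range {lo}-{hi}h)")
```

Reporting the range alongside the point estimate also keeps the uncertainty visible instead of hiding it in a single number.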
The real problem comes when the estimate you give is not the one management wants to hear.
Or when you're tasked to do something you've never done before, and your initial thoughts of "I bet y is like x" turn out to be terribly wrong.
Something as seemingly trivial as a different API can make two similar tasks take vastly different amounts of time, especially if you're approaching the second problem with the mindset induced by the first API.