A bad estimate is better than no estimate at all. There are plenty of empirical studies that support that. Some computer scientists have a hard time dealing with non-mathematical evidence. They have a huge blind spot for things like empirical case studies, qualitative studies, etc. Those are the tools that people in other fields use when mathematical models fall short. There are not a lot of useful mathematical models of software development. So, understanding that software development is an inherently social process points you in the direction of where to look for alternatives.
Humans are bad at estimating stuff. Software engineers doubly so. But even a bad estimate gives you some handle on how easy or hard something is to do. And you can break things down to improve the quality of the estimates. The rest is just using statistics to model the margins of error. Economists are very good at working with this sort of stuff. I always recommend people look at Don Reinertsen's work on lean development. He uses a lot of economic reasoning and notions such as cost of delay, amount of work in progress, and indeed cost estimates. Those being bad is less of an issue if you know they are only off by 2x vs. 10x. A 2x margin of error gives you some handle on the situation. A 10x uncertainty margin means you need to use some tricks to improve the quality. There are ways to do that. 2x is not great, but it's better than ignoring estimates.
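To make the "break things down, then use statistics" point concrete, here's a minimal Monte Carlo sketch. It assumes each subtask's actual cost is lognormally distributed around its estimate (a common modeling choice, not something from Reinertsen specifically); the sigma value and task counts are illustrative, not empirical.

```python
# Sketch: why decomposing one big estimate into many small ones can
# tighten the overall margin of error. Assumes multiplicative,
# independent, lognormal estimation error per subtask (an assumption).
import math
import random
import statistics

random.seed(42)

def actual_cost(estimate, sigma):
    """Draw an 'actual' cost around the estimate with multiplicative error."""
    return estimate * math.exp(random.gauss(0, sigma))

def relative_error(n_tasks, sigma, trials=10_000):
    """Median |actual/estimated - 1| when a job is split into n_tasks."""
    errs = []
    for _ in range(trials):
        total_estimated = n_tasks * 1.0  # each subtask estimated at 1 unit
        total_actual = sum(actual_cost(1.0, sigma) for _ in range(n_tasks))
        errs.append(abs(total_actual / total_estimated - 1))
    return statistics.median(errs)

# One monolithic estimate vs. the same work split into 16 pieces.
one_big = relative_error(1, sigma=0.7)
split_16 = relative_error(16, sigma=0.7)
print(f"single estimate: ~{one_big:.0%} off")
print(f"16 subtasks:     ~{split_16:.0%} off")
```

Independent errors partially cancel when summed, which is why the decomposed total lands closer to its estimate; correlated errors (a shared wrong assumption across subtasks) would not cancel, which is one reason real-world gains are smaller than this toy model suggests.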
> But even a bad estimate gives you some handle on how easy or hard something is to do
Can’t agree with this at all. A bad estimate is by definition bad. How does it give you a handle on anything if it’s unreliable? It’s like saying bad medicine is better than no medicine.
And when you talk about breaking an estimating problem into smaller estimates, you're now firmly in the domain of actually doing the work. An economist can't take a software problem and decompose it into tractable pieces. Even if they could - good luck getting the customer to pay for that.
That’s why I say estimating is an economic problem. The cost of accurate estimates asymptotically approaches the cost of the implementation. Nobody is willing to pay for accurate estimates, and anyway doing this doesn’t make sense.
Unfortunately there are very few levers to pull in terms of software planning. Time based project management is way over emphasised. This is why approaches like lean (MVP, CD) and agile exist in the first place.
That's the problem. You have an opinion but no empirical evidence and you think your opinion is pretty good. There's plenty of material out there that is a bit more well reasoned that disagrees with your opinion. For example Don Reinertsen's books.
Most Agile estimates are pretty bad. But that's OK, because it's still better than just winging it. One of the realities of working with real companies is that they have things like deadlines. Programmers don't like them. They don't believe in them. They are bad at delivering stuff on them. But they still exist. The reason agile/lean/scrum/etc. exist is to beat some sense into programmers completely and utterly failing to deliver anything on a predictable schedule. Bad estimates are the foundation of these processes, where bad is better than no estimates and the rest of the process is about mitigating the consequences of the estimates being low quality.
> That's the problem. You have an opinion but no empirical evidence and you think your opinion is pretty good. There's plenty of material out there that is a bit more well reasoned that disagrees with your opinion. For example Don Reinertsen's books.
Wait - there is a large body of work that is better reasoned than a quick post I dashed off in haste before dinner?
> One of the realities of working with real companies is that they have things like deadlines
Using condescending terms like "real companies" is unnecessary. You know nothing at all about me or how I've come to my conclusions; you just make arrogant assumptions that you know better about this than I do. I'm open to being wrong and learning new things, but do you really think you have the Silver Bullet?
My experience over a large number of projects suggests that the whole premise that most commercial software project problems can be solved using technical, mathematical or scientific means is wrong. My opinion - which aligns with the results of large-scale studies such as the Standish Group CHAOS reports, the PMI Pulse reports and more - is that the problems with commercial software projects appear to be almost entirely social and organisational. Projects are too ambitious, too complex, and are executed poorly by inexperienced people. No amount of time spent on estimation is going to fix that.
I am unfamiliar with Reinertsen's work, but I think it's quite interesting that economics is not exactly a science either.
A bad estimate gives you a handle on how easy or hard your engineers think that something is. Experience shows that this has little correlation with how easy or hard something really is.
That's a good example of a qualitative statement that would be a lot more credible with empirical evidence backing it up. "Experience shows" is not enough here. It raises more questions than it answers.
Basically, uncertainty is the key concept here. And cost, complexity, and time estimates are ways to get a grip on that uncertainty. Unlike software engineers, economists and sociologists are used to reasoning about and modeling uncertain situations. Going all hand-wavy and winging it is typically not what they advertise.
And I'd argue that their bad estimate includes their knowledge of how difficult the task is. Unless you're completely broken, you're not going to under-estimate something wildly if you are unsure of what you need to do or have no idea how to do it.
Sure, I can give estimates that don’t underestimate and account for eventualities. It’s just that my error bars are large enough that in that case I have to estimate several months for tasks that I believe should take a week.