"Software is done when it's done; no sooner, no later."
This is true, but does everybody in your organisation understand the Definition of Done for each task your developers begin working on?
It seems that one of your biggest issues is estimation, but developers can only provide a realistic estimate when they have an unambiguous and clearly-specified 'definition of done' - one which includes company process issues, e.g. user documentation, work packages on a formal release, etc.
It's not surprising that over-estimation is causing a problem, given that most developers find estimating the time required to complete a task to be one of the most difficult things they are asked to do.
However, most developers tend to have a reasonable (albeit optimistic) handle on the amount of effort they are able to put in, for a given period of time.
The problem is often that developers struggle to relate a task to the total amount of effort required when they're dealing with incomplete information - particularly if they are pressured to come up with all the answers up-front for a really huge task.
This naturally leads to time estimates becoming disconnected from reality, and to developers losing sight of things like the build process and user documentation.
The disconnect tends to start at the very beginning when the task is described; and it's usually compounded by a non-technical person drawing up a list of requirements without having any clue of the amount of effort needed.
Sometimes people in senior management specify tasks and completely ignore company process issues; it's not uncommon for senior management to think that things like defining tests, creating a formal released build, or updating a user document just happen magically, with no time or effort required.
Sometimes projects fail before a developer has even written a line of code because somebody, somewhere is not doing their job correctly.
If the development team are not involved in agreeing requirements or capturing acceptance criteria, then that's a failure of management - because it means that someone who has an insufficient understanding of the code and the technical issues has committed the business to an incomplete set of requirements, and left the project open to misinterpretation, scope creep, gold plating, etc.
If the development team are involved in capturing and agreeing requirements, then it could be a failing of the team, who are responsible for clarifying the details (and the acceptance criteria - i.e. "What does the deliverable look like? When is it done?"). The development team are also responsible for saying NO when there are other blocking issues in the way, or if a requirement is simply unrealistic.
So if the developers are involved in the capture of requirements:
- Does the team have an opportunity to sit down with the product manager to clarify the requirements/definition of done?
- Does the team ask sufficient questions to clarify implied/assumed requirements? How often are those questions answered satisfactorily?
- Does the team demand acceptance criteria (a definition of done) before providing an estimate?
- How well are acceptance criteria usually captured for each task? Is it a vague document with sparse detail, or does it describe tangible functionality, in wording that a developer could unambiguously translate into a test?
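To make that last point concrete: a well-written acceptance criterion maps almost directly onto an automated test. Here is a minimal sketch in Python, using a made-up CSV-export feature with a toy stand-in implementation (neither the function nor the criterion comes from your project; they only illustrate the mapping):

```python
# Hypothetical acceptance criteria for an invented feature:
#   1. "Exporting an empty order list produces a CSV containing only the header row."
#   2. "Each order becomes exactly one row in the exported CSV."

def export_orders_csv(orders):
    """Toy stand-in implementation, included only so the example runs."""
    header = "order_id,total"
    rows = [f"{o['order_id']},{o['total']}" for o in orders]
    return "\n".join([header] + rows)

def test_empty_order_list_exports_header_only():
    # Criterion 1, translated word-for-word into an assertion.
    assert export_orders_csv([]) == "order_id,total"

def test_each_order_becomes_one_row():
    # Criterion 2: header plus one line per order.
    orders = [{"order_id": 1, "total": 9.99}, {"order_id": 2, "total": 5.00}]
    assert len(export_orders_csv(orders).splitlines()) == 3
```

If a criterion can't be rewritten as a test this mechanically, that's usually a sign it is too vague to estimate against.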
The chances are that the productivity of your team is not an issue; your team is probably putting in all the effort they are able to put in with regards to development. Your real issues could be one or more of the following:
- Incomplete and vague requirements.
- Requirements/tasks which are just too big in the first place.
- Poor communication between the development team and upper management.
- A lack of clearly-defined acceptance criteria before the tasks are handed to the team.
- Incomplete or vague/ambiguous specification of acceptance tests. (i.e. Definition of Done)
- Insufficient time allocated to defining/agreeing acceptance criteria.
- Developers didn't allow time to test the existing baseline code or fix existing bugs.
- Developers did test the existing baseline code, but did not raise the bugs as blocking issues before providing estimates on the requirements.
- Management saw the issues/bugs and decided that bugs do not need to be fixed before writing new code.
- Developers are under pressure to account for 100% of their time, even though possibly 20% (or some similar number) of their time is probably taken up by meetings, distractions, emails, etc.
- Estimates are taken at face value and nobody adds room for error or contingency (e.g. "The team estimated 5 days, so we'll plan for 8.").
- Estimates are treated by everybody (developers and management) as single numbers instead of a realistic range - i.e.
- Best case estimate
- Realistic estimate
- Worst-case estimate
... the list could go on a lot longer than that.
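On that last point about ranges: the classic PERT technique combines the three numbers into a single weighted expectation, which gives management a figure that already carries some contingency. A small sketch (the 3/5/12-day figures are invented for illustration):

```python
def pert_estimate(best, realistic, worst):
    """Three-point estimate combined with the classic PERT weighting."""
    expected = (best + 4 * realistic + worst) / 6  # realistic case dominates
    spread = (worst - best) / 6                    # rough uncertainty of the estimate
    return expected, spread

# A task estimated as best 3 / realistic 5 / worst 12 days:
expected, spread = pert_estimate(3, 5, 12)
# expected ≈ 5.8 days, spread ≈ 1.5 days
```

Reporting "about 6 days, give or take 1.5" communicates far more honestly than the single number 5 ever could.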
You need to do some "fact finding" and figure out exactly why the estimates are consistently disconnected from reality. Is the existing baseline software bad? Does it lack unit test coverage? Do your developers avoid communication with management? Do management avoid communication with developers? Is there a disconnect between the management expectations and developer expectations when it comes to "Definition of Done"?