There are several parts to the answer.
First, if you are not using a formal estimating methodology that produces artifacts which can be sanity-checked, and an estimate based solely on the data in those artifacts and on coefficients calibrated against actuals from previous projects, you are h0zed from the beginning.
Books to read: "Software Engineering Economics" by Barry Boehm, "Software Cost Estimation with COCOMO II" by the same author, and "Controlling Software Projects" by Tom DeMarco. These are the classics of software cost and schedule estimation.
If your estimating methodology is not based on actuals that can be measured DIRECTLY from previous projects, you are almost as badly h0zed. Yes, this is a hard slap in the face at "function points": I have been looking for years, and I have NEVER found either a clear explanation of what a function point is or a way to measure one from the deliverables. It is also a statement in favor of SLOC (source lines of code) as the basis of estimation: for small modules, pretty much everyone will agree on 10 SLOC vs. 50 SLOC vs. 500 SLOC, and it is EASY to measure. (Yes, SLOC can be gamed, but, seriously, how long will someone be allowed to game his SLOC with creative source editing before the Coding Standards Police escort him away in handcuffs? That is a cute way of saying that SLOC gaming doesn't happen in the real world.)
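For the curious, here is a minimal sketch of what "easy to measure" means in practice; the comment markers are assumptions, and a real counter would follow whatever the local coding standard says counts as a source line.

```python
# Count physical source lines of code: non-blank lines that are not pure
# comment lines. The comment markers ("#", "//") are assumptions; adjust to
# whatever the local coding standard defines as a source line.
def count_sloc(path, comment_prefixes=("#", "//")):
    sloc = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefixes):
                sloc += 1
    return sloc

# Example: total SLOC over a set of deliverables.
# total = sum(count_sloc(p) for p in ["parser.c", "scheduler.c"])
```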
What this gives you is the data for a PowerPoint pitch (the only communications medium that managers these days understand) that says: These are the components, these are the estimated sizes of each component, these are the complexities, here are the equations, here are the individual results, and here is the grand total. Your manager then has to argue with the set of components, or the INDIVIDUAL estimated component sizes, or the effort adjustment coefficients, or your adding machine tape.
That's a hard, uphill argument to make, even for a typical modern software manager.
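As a rough illustration of what goes onto that adding machine tape, here is a minimal sketch in the COCOMO style; the component names, sizes, and effort adjustment factors are made up, the 2.4 and 1.05 are Boehm's basic-model organic-mode constants, and all of it should be replaced by coefficients calibrated from your own actuals.

```python
# Sketch of a component-by-component effort estimate in the COCOMO style:
# effort (person-months) = A * KSLOC**B * EAF, summed over components.
# The components, sizes, and EAFs below are illustrative; A and B should be
# recalibrated against actuals from your own previous projects.
A, B = 2.4, 1.05

components = [
    # (name, estimated KSLOC, effort adjustment factor)
    ("parser",    4.0, 1.00),
    ("scheduler", 7.5, 1.15),   # higher complexity
    ("reporting", 2.0, 0.90),   # well-understood, low risk
]

total = 0.0
for name, ksloc, eaf in components:
    effort = A * (ksloc ** B) * eaf
    total += effort
    print(f"{name:10s} {ksloc:5.1f} KSLOC  EAF {eaf:.2f}  -> {effort:6.1f} person-months")

print(f"{'TOTAL':10s} {'':21s}  -> {total:6.1f} person-months")
```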
Second, your emphasis on worst-case estimates is well-founded. The problem is that an estimate is not a single point in time; it is a probability statement: you are saying that there is an "X" percent probability that we will be finished by date "Y", where "X" depends on your manager's perception. The corollary is immediately obvious: there is a (100 - "X") percent probability that you will overrun the estimate. A worst-case estimate is aiming for 100% probability. The typical software manager wants a 1% estimate: there's about a 1% chance you'll actually make it and a 99% chance you'll blow the schedule, but the small estimate makes him look good and lets him blame you for the blown estimate when reality comes crashing in.
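To make the "X percent by date Y" framing concrete, here is a purely illustrative sketch that treats each task's effort as a distribution instead of a point and reads the schedule off as percentiles; the triangular distributions and task figures are assumptions, not anything out of Boehm.

```python
# Illustrative only: sample total effort many times from per-task
# (best case, most likely, worst case) ranges, then report the hours you
# would have to quote for a given confidence level. All numbers are made up.
import random

tasks = [(20, 40, 90), (10, 25, 60), (30, 50, 120)]  # (best, likely, worst) hours

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(10_000)
)

for pct in (50, 80, 99):
    print(f"{pct}% chance of finishing within {totals[len(totals) * pct // 100]:.0f} hours")
```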
Here I insert a data point from Boehm's "Software Engineering Economics": Detailed COCOMO, which estimates each component down to its most detailed parts and then adds up the estimates, put the methodology on the map, big time, when Boehm showed that it hit within 30% of actuals something over 60% of the time. That was far better than any other estimating methodology available at the time could do.
The key takeaway here is that no estimate will ever be perfect. They all have an inescapable margin of error, and there is always the possibility of a major overrun or underrun. You state this, you say that the estimating methodology is known to give this range of results, and you move on.
Third, if your estimate runs to more than two or three significant digits, you are fooling yourself. If your estimate says 455 man-hours, you are claiming it isn't 454 and it isn't 456. You can't predict bathroom congestion with that much precision, never mind flu outbreaks.
What this means is you add it all up, you get, say, 3579 man-hours, and then you round it to 3600. You keep the original numbers, so that you can tell your manager how you rounded, and that you rounded BECAUSE you can't predict bathroom breaks or bachelor party hangovers.
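The rounding itself is trivial; a helper like the following (using the 3579 figure from above) does it to any number of significant digits.

```python
import math

def round_sig(x, sig=2):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

print(round_sig(3579))     # -> 3600
print(round_sig(3579, 3))  # -> 3580
```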