So, I've done this through 4 companies now as a manager of medium-sized (5-20 person) teams. And I think there are different tools for a couple of different jobs:
Retrospectives (Agile) & Lessons Learned
Good for stuff that is somewhat like a scrum - a way for a team to look back reflectively. Good for finding the things that don't fit into a single ticket, and that tend to be more about communication. I find it a lot harder to get much out of this with a team that is too big to scrum together (like a department-sized group), because the low-formality discussion elements really need a team size that allows fairly equal person-to-person communication. Doing this with a group of 20+, you end up with deputized speakers (official or unofficial) being the only contributors.
Agile has clear processes and techniques for Retrospectives.
Other times, companies that may not use Agile will do a "lessons learned" meeting using their own concoction of procedures, but the good formats generally seek to capture all ideas, avoid blame, and keep a separation between brainstorming, which is non-judgmental, and prioritizing/acting, which seeks to do the work with the most bang for the buck.
Both processes can be biased by the participants' personal experiences. For example, something that is super annoying may not actually be the biggest productivity killer - it's just the squeakiest wheel. And folks with big personalities can sway others if the moderator isn't good at counterbalancing.
Single Incident/Big Impact
Post mortems (and many of the great write-ups here already) are fabulous for single-incident cases. There's a lot of work (as others said) to steer this away from being a blame game and toward a useful learning exercise. That is the risk with diving deep, though - you need to make sure a single bad situation doesn't become the only reflection of an individual's or group's performance. Performance management has to be kept quite separate from this exercise.
The drawback is that a really good post mortem takes real time. And a superficial post mortem won't be worth the time you put into it - in some ways, bad post mortem research can be worse than none at all.
So you end up needing a bar for "what's so impactful that we should do a post mortem?". Each business is different here, but my advice would be to ground it in the business strategy, and then do your best to find unambiguous metrics for that (as opposed to "which situation was so bad that the CEO was embarrassed/woken up in the middle of the night/etc." - it may be worth doing that one too, but it shouldn't be the only trigger).
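To make that concrete, here's a minimal sketch of a metric-based bar. The thresholds and field names (downtime_minutes, customers_affected, revenue_at_risk) are entirely hypothetical - yours should fall out of your own business strategy:

```python
# Hypothetical post mortem bar: thresholds and field names are invented
# for illustration; ground yours in your own business strategy.
from dataclasses import dataclass

@dataclass
class Incident:
    downtime_minutes: float
    customers_affected: int
    revenue_at_risk: float  # in your reporting currency

def needs_post_mortem(incident: Incident) -> bool:
    """An unambiguous, metric-based bar instead of 'the CEO was embarrassed'."""
    return (
        incident.downtime_minutes >= 60
        or incident.customers_affected >= 500
        or incident.revenue_at_risk >= 10_000
    )

print(needs_post_mortem(Incident(90, 40, 2_000)))  # True: an hour+ of downtime
print(needs_post_mortem(Incident(5, 10, 500)))     # False: below every threshold
```

The point isn't the code - it's that anyone in the company can apply the same bar and get the same answer.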
Post mortems are ... post - the incident resolution process is generally separate. It helps in judging what deserves a post mortem if you have an incident resolution tracking process that is relatively public, so that leaders can see how other leaders are handling things.
Death by 1,000 paper cuts
I've also been in situations where my technical issues were not delivered in big bangs that would be helped by post mortems, but in many small, time-draining, soul-sucking problems. This can be tough, as you never really get the energy to deal with the root causes, and all of it seems trivial.
At that point, drawing out data is a commonly useful technique. I would not recommend the full-on process (too burdensome), but CMMI/CMM can be useful. It's all about tracking and analyzing data for process improvements. Big, huge companies use it, and the downside is that it's process for your process, and as such it can be an impediment to making radical change. But it's got some good techniques for data analysis in there. Steal those and discard the rest before any of it sticks to you.
What I learned most from CMMI is that you can look at these 1,000-paper-cut issues and form some interesting conclusions by categorizing the data and looking at it in bulk. The key is that the data tracking has to be consistent and accurate enough for the judgments you are trying to make. For example: how are people tracking time spent solving the issue? Will that matter? If an issue is attributed to a component, is the attribution accurate? Do you know the root cause? Do you have categories for it? Is everyone using those categories the same way (usually a human has to enter this data...)?
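As a sketch of what "looking at it in bulk" can mean in practice - the field names and sample records below are hypothetical, and the same aggregation works fine in Excel or SQL:

```python
# Aggregate time lost by root cause, and flag records too incomplete to
# trust. All field names and sample data are hypothetical.
from collections import defaultdict

issues = [
    {"component": "build",  "root_cause": "flaky test",   "hours_lost": 1.5},
    {"component": "build",  "root_cause": "flaky test",   "hours_lost": 2.0},
    {"component": "deploy", "root_cause": "config drift", "hours_lost": 4.0},
    {"component": "deploy", "root_cause": None,           "hours_lost": 3.0},
]

hours_by_cause = defaultdict(float)
unattributed = 0.0
for issue in issues:
    cause = issue["root_cause"]
    if cause is None:
        unattributed += issue["hours_lost"]  # no conclusions possible here
    else:
        hours_by_cause[cause] += issue["hours_lost"]

for cause, hours in sorted(hours_by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {hours:.1f}h")
print(f"unattributed: {unattributed:.1f}h  <- if large, fix tracking first")
```

The "unattributed" bucket is the data-quality check from above: if it dominates, fix the tracking discipline before drawing conclusions from the rest.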
This becomes the realm of statistics - but even a determined manager with Excel can sometimes pull out some useful details. The other trick: know how to use statistics, and/or don't use stats you don't understand.
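As one example of a statistic that's simple enough to use safely, here's a sketch (with made-up counts) that puts a Wilson score interval around one category's share of issues, so a handful of data points doesn't get read as certainty:

```python
# Don't over-read small samples: a 95% Wilson score interval for the
# share of issues in one category. The counts below are hypothetical.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# 7 of 20 paper cuts blamed on "config drift": is it really the top cause?
lo, hi = wilson_interval(7, 20)
print(f"35% observed, but plausibly anywhere from {lo:.0%} to {hi:.0%}")
```

With only 20 issues, "config drift is 35% of our pain" could plausibly be anywhere from ~18% to ~57% - a good reason to collect more data before reorganizing the team around it.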
Back in my days of using an actual process as opposed to a DIY solution, my CMMI expert was a stats goddess. She had tools, skills and the power to communicate - which made the whole thing work well. A huge company may make that kind of investment... a smaller/less organized company may not.
An example from elsewhere in this thread: a team discovered that `if` statements were causing 90% of their bugs. I think OP wants to learn ways of discovering those kinds of bug root causes.