TL;DR
I agree, and I think our rule books do too. I'm glad we have that FAQ post. I'm glad we're trying to "teach askers how to fish", but a question that asks for a particular fish is not the same as a question asking how to catch any fish. The deeper problem we have to figure out how best to tackle is that we're getting tons of non-generic debugging-problem questions (that don't meet the "minimal" and "example" criteria of an MRE) instead of generic, generally useful questions (that meet all the criteria of an MRE). You're probably right that there are better solutions. I'll try to give an analysis of the current situation with our minimal reproducible example guidance, list some existing SE projects to improve the system, and then touch on other valid response options.
Disclaimer (because I have to have some SEDE data): I am new to SQL, and am a human being. I might have made mistakes writing the query. Please correct me if you find a mistake!
Here's the deeper problem with our current scenario that I think is motivating the use of this technique of dup-closing to an FAQ page: we're being treated as a debugging help-desk service, and the library, which was intended to be filled with generic questions that can help many people, is being filled with non-generic questions that are likely to help only the original asker and perhaps a small handful of others.
If you look at the 4th and 5th tables of my first SEDE query, you'll see that of the posts that get a comment linking to the debugger FAQ post (i.e. posts where someone makes a "friendly link" to it instead of dup-close-voting/flagging), 55.07% are currently deleted, 10.68% are closed but not deleted, and only roughly 10% have a positive score. They're generally not good questions: they don't meet some combination of our criteria for writing good questions.
Just for some bonus SEDE fun, here's a graph of the current status of posts which are linked to the debugger FAQ post either in a comment or by being closed as a duplicate, grouped by the creation date of the post. Note the funny (and sad) onslaught of poor questions that we get during our "eternal septembers". Here's a view of the top FAQs being used as dup targets over time.
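For readers curious what this kind of breakdown looks like mechanically, here's a miniature sketch of the aggregation behind percentages like those above, run against a made-up in-memory table. To be clear: this is not the actual SEDE query; SEDE uses T-SQL and a different schema, and the `linked_posts` table, its columns, and the data here are all invented for illustration.

```python
import sqlite3

# Hypothetical miniature of a "what happened to these posts?" breakdown.
# The schema and rows are made up; the real SEDE schema differs.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE linked_posts (
        id INTEGER PRIMARY KEY,
        score INTEGER,
        is_closed INTEGER,   -- 1 if the post is closed
        is_deleted INTEGER   -- 1 if the post is deleted
    )
""")
rows = [
    (1,  3, 0, 0),
    (2, -1, 1, 1),
    (3,  0, 1, 0),
    (4, -2, 1, 1),
    (5,  0, 0, 0),
]
conn.executemany("INSERT INTO linked_posts VALUES (?, ?, ?, ?)", rows)

# Percentages of posts that are deleted, closed-but-not-deleted,
# and positively scored, in the spirit of the 4th/5th tables above.
summary = conn.execute("""
    SELECT
        100.0 * SUM(is_deleted) / COUNT(*)                   AS pct_deleted,
        100.0 * SUM(is_closed AND NOT is_deleted) / COUNT(*) AS pct_closed_not_deleted,
        100.0 * SUM(score > 0) / COUNT(*)                    AS pct_positive_score
    FROM linked_posts
""").fetchone()

print(summary)  # (40.0, 20.0, 20.0) for this toy data
```

The trick of summing boolean expressions (`SUM(score > 0)`) to count matching rows works in SEDE's T-SQL too, though there you'd write it with `CASE WHEN` instead.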
Looking at that data, it almost makes you think "no wonder / thank goodness people are hammering and deleting these". Using a dup hammer is way faster than getting three close votes, which may even require probing for enough information to know which close reason to use. Are there solutions other than that? Yeah.
We can (and I think the system needs to do a better job of this too) tell people how to ask a good generic question, so that they know which questions aren't a good fit, and how to do a good job of asking questions that are.
We already have an instruction manual for how to write good generic questions about debugging-related problems: the help center's page on how to create a minimal, reproducible example.
We have a close reason for questions that don't meet these criteria: "needs debugging details".
So what gives?
The situation with MREs and "needs debugging details"
TL;DR: Our system doesn't automate/force new users to learn how to ask a good first (or non-first) question, and we as a community of answerers/reviewers aren't being very strict about it either.
We answerers and reviewers aren't being strict enough on the "minimal" and "example" criteria in "MRE"
What I can say confidently is that the MRE page and the "needs debugging details" close reason state that the question should present the shortest code possible that reproduces the issue, which, in my experience, very few askers actually do (my judgement here is very possibly flawed or subjective, though). So where that close reason applies, it can be used (while using your judgement, and being welcoming, friendly, and helpful, among other things).
The MRE help page's guidance on making an example minimal states:
The more code there is to go through, the less likely people can find your problem. Streamline your example in one of two ways:
- Restart from scratch. Create a new program, adding in only what is needed to see the problem. Use simple, descriptive names for functions and variables – don’t copy the names you’re using in your existing code.
- Divide and conquer. If you’re not sure what the source of the problem is, start removing code a bit at a time until the problem disappears – then add the last part back.
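To make that advice concrete, here's a hedged sketch of what the end result of either process might look like. The function and the bug are entirely hypothetical, not drawn from any real question; the point is only that the whole reproduction fits on one screen and fails on its own.

```python
# A minimal, reproducible example of a (hypothetical) bug report:
# "items added in one call keep showing up in later calls".

def add_item(item, bucket=[]):  # BUG: the default list is created once and shared across calls
    bucket.append(item)
    return bucket

print(add_item("a"))  # ['a']
print(add_item("b"))  # ['a', 'b'] -- expected ['b']; the bug is reproduced
```

Everything project-specific (real variable names, unrelated features, I/O) has been stripped away, so an answerer can run the snippet as-is and see the failure immediately, which is exactly what the "minimal" and "example" criteria are asking for.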
Why aren't we coordinated in being strict/firm on this? I don't know.
I've heard it said that some people just like to help people with anything rep-or-no-rep. That's great, but it doesn't fall very much in line with the original goal for what Stack Overflow should be, which is its main selling point to most no-account users: a "reference manual" in Q&A form.
Maybe some answerers/reviewers just don't know that we have such a strict policy. Admittedly (and please be gentle with me), even I hadn't read the MRE page in much detail to realize that it could be applied so powerfully.
I'm interested to hear from people who have been here for a long time cleaning up poor-quality content. How "strict" are you in applying the MRE criteria to questions? Is there consensus between the veterans on how strictly/firmly it should be applied?
Our MRE help page suggests that debugging can help in creating an MRE, but nobody reads it
(including me, before I read it and realized it does)
In the "What topics can I ask about here?" help page, it says:
Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. See: How to create a Minimal, Reproducible Example.
That single statement is very easy to misinterpret/twist if you take it out of the context of the full explanation of what an MRE is: Oh! Asking for debugging help is on-topic? And all I need is to chop out all the parts of my project code that aren't related to the buggy feature, paste my error message, and make a magic wish? I love this help desk! (Exaggerated, but I think you get the point.)
Only if the asker really goes and reads the MRE page will they see:
For more information on how to debug your program so that you can create a minimal example, Eric Lippert has written a fantastic blog post on the subject: How to debug small programs.
In relation to Shog9's post on problem-solving effort
I'm glad HenryEcker brought up the Adding "lack of effort" as a close vote reason post, because it's marked as a dup of a post where there's a response written by @Shog9 that makes this discussion even more interesting. Shog9 defined three types of effort: research effort (looking for a solution before asking), definition effort (defining a clear, specific question), and problem-solving effort. He showed that we have close reasons for the first two types of effort, but not for the third, because:
Judging problem-solving effort is really subjective. Assuming sufficient research and definition effort, you're left to make a decision as to whether or not the asker has suffered enough yet; this quickly turns into a sick Milgram experiment.
Trying to maximize effort actively subverts the purpose of this site. We're trying to create a library of reusable information here, with the idea that if someone takes the time to define their problem and then search for it they won't have to ask a question at all! When it works, any answer can go on to benefit many people beyond the person who asked the question [...] If we disallow all questions that don't require investment beyond research, we give up the ability for folks to research their problems using Stack Overflow, and end up with a library of questions so specific to their askers as to be worthless to anyone else.
Important note: my interpretation of Shog9's use of the word "specific" in his definition of definition effort is that he means "detailed-enough"/"well-specified"/"as-opposed-to-lacking-in-focus" (the good kind of specific), and not "being a very non-generic question that has lots of contextual dependencies that others with the same generic problem will not have" (the bad kind of specific, which is the sense of the word I think he is using when he says "a library of questions so specific to their askers as to be worthless to anyone else").
It's strange, because debugging (at least in my mind) falls under problem-solving effort, and yet, as the result of a lack of this specific type of problem-solving(?) effort, we have gotten the "bad ending" (video game terminology) instead of the "good ending": we have a library of questions so specific to their askers as to be worthless to anyone else.
I thought about this more, and my theory (needs confirmation from Shog9) is that any amount of debugging effort required to create an MRE falls under the category of definition effort rather than problem-solving effort. But this isn't preventing bad questions from being asked, because we're not making askers read the MRE guidance.
The system doesn't do a good enough job of automating/forcing askers to learn how to ask good questions before asking
The current system doesn't give askers any clear pointer to read the instructions on creating an MRE until they fail to ask a question that meets its criteria, and the burden of evaluating those criteria falls on human volunteers.
While my frustrated mind thinks the solution must be simple, there's also a very good chance it isn't. If you'd like to know about current efforts to improve the system, go read about the Ask Wizard, Staging Ground Workflow, and the new user onboarding project (which also contains a lot of good related discussion in the form of feedback).
Other close reasons and constructive resolutions
In Shog9's answer to this post about whether it's okay to comment that Stack Overflow is not a code writing service, he says that we shouldn't respond to lazy questions with a lazy action:
Even if you don't believe these comments are inherently rude, the sheer inefficiency and dishonesty that rides their coattails has gotta be a bit off-putting. If you're worried about dishonest students, maybe start by not pulling the same lazy, manipulative crap that they are; if you want to help receptive askers, then focus on giving them something they can actually use.
There are also the Ask Wizard and the Staging Ground workflow. Unfortunately, the current Ask Wizard seems to assume that every question is a debugging-problem question, and the Staging Ground workflow doesn't seem to be designed to take load off of reviewers.
If we really put in the work (helping the help vampires and accepting a fate as a help desk for non-generic questions), I'd bet that, due to the very nature of the questions being non-generic, even if they are solved/answered, the answers will not be useful to many people. Debugging is the process of finding bugs. Many bugs created by "non-expert"/novice programmers are due to simple mistakes.