Post Undeleted by starball
Post Deleted by starball
whoops. forgot to include comment-linking to the debugger FAQ page in the advice.
starball

(emphasis added). We just need to make people actually read the rules.

(emphasis added). I think it's better to just leave a comment linking to the debugger FAQ post and the MRE page, quoting the relevant part, instead of closing as a duplicate of the debugger FAQ.

I don't remember what I changed.
starball

Be assured that I'm not trying here to downplay the hard work that people have put into designing those FAQ posts, guiding people toward them, and closing poor questions, and the good results that those efforts have brought to the site over the years. To all those contributors, thank you! I'm trying to make a measured analysis of the problem we're trying to solve, why this technique may not be the best solution considering our goals and playbooks, and what other solutions currently exist and how they compare.

Please correct me if you find a mistake!

Here's a link. Here's a summary of the results:

, but an answer which teaches how to debug is not the same as an answer which did the debugging, found the specific problem, explained it, and shows the solution to the specific problem.

The more code there is to go through, the less likely people can find your problem. Streamline your example in one of two ways:

  1. Restart from scratch. Create a new program, adding in only what is needed to see the problem. Use simple, descriptive names for functions and variables – don’t copy the names you’re using in your existing code.
  2. Divide and conquer. If you’re not sure what the source of the problem is, start removing code a bit at a time until the problem disappears – then add the last part back.

[...]

For more information on how to debug your program so that you can create a minimal example, Eric Lippert has written a fantastic blog post on the subject: How to debug small programs.

The current system doesn't give askers any clear pointer to read the instructions on creating an MRE until they fail to ask a question that meets its qualifications, and the burden of evaluating those criteria falls on human volunteers.

While my frustrated mind thinks the solution must be simple, there's also a very good chance it isn't.

There are efforts that the SE team is making to address this: read about the new user onboarding project, which contains a lot of good related discussion in the form of feedback, such as this one by Shog9.

There's also the Ask Wizard and Staging Ground Workflow. Unfortunately, the current Ask Wizard seems to assume that any question is a debugging-problem question, and the Staging Ground Workflow doesn't seem to be designed to take load off of reviewers.

It is OK to edit a question to make it more general. With the power of editing comes the power to take someone’s selfish, very specific question, and edit it a little bit until they’re asking the more general question that hundreds of people encounter. [...] In fact, sometimes selfish, stupid questions of the “do my homework” variety can be easily edited into a form where the answer will provide an extremely valuable resource for the internet at large.

Or, if appropriate, we can flag as [not reproducible / caused by typo].

That is a lot of legwork and is not much to write home about, but I guess that's just the way things are right now.

Be assured that I'm not trying here to downplay the hard work that people have put into designing those FAQ posts, guiding people toward them, and closing poor questions, and the good results that those efforts have brought to the site over the years. To all those contributors, thank you! I'm trying to make a measured analysis of the problem we're trying to solve, why this technique may not be appropriate considering our goals, playbooks, and how the duplicate mechanism is intended to be used, and what other solutions currently exist and how they compare.

Here's a link. Here's a summary of the results:

, but it says "questions may", so it doesn't override the more well-defined criteria laid out before.

The more code there is to go through, the less likely people can find your problem. Streamline your example in one of two ways:

  1. Restart from scratch. Create a new program, adding in only what is needed to see the problem. Use simple, descriptive names for functions and variables – don’t copy the names you’re using in your existing code.
  2. Divide and conquer. If you’re not sure what the source of the problem is, start removing code a bit at a time until the problem disappears – then add the last part back.

[...] For more information on how to debug your program so that you can create a minimal example, Eric Lippert has written a fantastic blog post on the subject: How to debug small programs.

The current system doesn't give askers any clear pointer to read the instructions on creating an MRE until they fail to ask a question that meets its qualifications, and the burden of evaluating those criteria falls on human volunteers. While my frustrated mind thinks the solution must be simple, there's also a very good chance it isn't.

There are efforts that the SE team is making to address this: read about the new user onboarding project, which contains a lot of good related discussion in the form of feedback, such as this one by Shog9.

There's also the Ask Wizard and Staging Ground Workflow. Unfortunately, the current Ask Wizard seems to assume that any question is a debugging-problem question, and the Staging Ground Workflow doesn't seem to be designed to offload work from humans onto the system.

It is OK to edit a question to make it more general. With the power of editing comes the power to take someone’s [...], very specific question, and edit it a little bit until they’re asking the more general question that hundreds of people encounter. [...] In fact, sometimes [...] questions of the “do my homework” variety can be easily edited into a form where the answer will provide an extremely valuable resource for the internet at large.

In my experience, for many such debugging-problem questions that get resolved, the resolution comes pretty close to matching the "caused by typo" close reason, which states: "While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers." That said, I'm not aware of any precise and widely applicable official definition of what that means, so if in doubt, maybe don't use that close reason, or ask here on meta or in SOCVR.

some attempts to simplify and better organize the flow. also removed the discussion about the problem-solving effort quote from shog9 and replaced it with a single link to a much more relevant shog9 post.
starball

TL;DR
I agree, and I think our rule books do too. I'm glad we have that FAQ post. I'm glad we're "teaching askers how to fish", but a question that asks for a particular fish is not the same as a question asking how to catch any fish. The deeper problem we have to figure out how best to tackle is that we're getting tons of non-generic debugging-problem questions (that don't meet the "minimal" and "example" criteria of an MRE) instead of generic, generally useful questions (that meet all the criteria of an MRE). You're probably right that there are better solutions. I'll try to give an analysis of the current situation with our minimal reproducible example guidance. I'll then touch on other valid response options.

(because I have to have some SEDE data) Disclaimer: I am new to SQL, and am a human being. I might have made mistakes writing the query. Please correct me if you find a mistake!

Here's the deeper problem with our current scenario that I think is motivating the use of this technique of dup-closing to an FAQ page: We're getting treated as a help desk service, and the library which was intended to be filled with generic questions that can help many people is being filled with non-generic questions that are likely only to help the original asker and perhaps a small handful of people.

If you look at the 4th and 5th tables of my first SEDE query, you'll see that for posts that get a comment linking to the debugger FAQ post (i.e. posts where someone makes a "friendly link" to it instead of dup-close-voting/flagging), 55.07% are currently deleted, 10.68% are closed but not deleted, and only roughly 10% of them have a positive score. They're generally not good questions: they don't meet some combination of our good-question criteria.

Just for some bonus SEDE fun, here's a graph of the current status of posts which are linked to the debugger FAQ post either in a comment, or closed as a duplicate, grouped by the creation date of the post. Note the funny (and sad) onslaught of poor questions that we get during our "eternal septembers". Here's a view of the top FAQ being used as dup targets over time. It almost makes you think "no wonder / thank goodness people are hammering and deleting these". Are there other solutions than that? Yeah.

We can (and I think the system needs to do a better job to) tell people how to ask a good generic question, so that they know what questions aren't a good fit, and how to do a good job of asking questions that are.

We already have an instruction manual for how to write good generic questions about debugging-related problems: the help center's page on how to create a minimal, reproducible example.

We have a close reason for questions that don't meet these criteria: "needs debugging details".

So what gives?

The situation with MREs and "needs debugging details"

TL;DR: Our system doesn't automate/force new users to learn how to ask a good first (or non-first) question, and we as a community of answerers/reviewers aren't being very strict about it either.

We answerers and reviewers aren't being strict enough on the "minimal" and "example" criteria in "MRE"

What I can say confidently is that the MRE page and the "needs debugging details" close reason state that the question should present the shortest code possible that reproduces the issue, which in my experience, very very very few askers actually do (my judgement here is very possibly flawed or subjective though). So where that close reason applies, it can be used (while using your judgement, and being welcoming, friendly, and helpful).

The MRE help page's guidance on making an example minimal states:

The more code there is to go through, the less likely people can find your problem. Streamline your example in one of two ways:

  1. Restart from scratch. Create a new program, adding in only what is needed to see the problem. Use simple, descriptive names for functions and variables – don’t copy the names you’re using in your existing code.
  2. Divide and conquer. If you’re not sure what the source of the problem is, start removing code a bit at a time until the problem disappears – then add the last part back.

Why aren't we coordinated in being strict/firm on this? I don't know.

  • I've heard it said that some people just like to help people with anything rep-or-no-rep. That's great, but it doesn't fall very much in line with the original goal for what Stack Overflow should be, which is its main selling point to most no-account users: a "reference manual" in Q&A form.

  • Maybe some answerers/reviewers just don't know that we have such a strict policy. Admittedly (and please be gentle with me), even I hadn't read the MRE page in much detail to realize that it could be applied so powerfully.

  • I'm interested to hear from people who have been here for a long time cleaning up poor-quality content. How "strict" are you in applying the MRE criteria to questions? Is there consensus between the veterans on how strictly/firmly it should be applied?

Our MRE help page suggests that debugging can help in creating an MRE, but nobody reads it

(including me, before I read it and realized it does)

In the "What topics can I ask about here?" help page, it says:

Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. See: How to create a Minimal, Reproducible Example.

That single statement is very easy to mis-interpret/twist if you take it out of the context of the full explanation of what an MRE is: Oh! asking for debugging help is on-topic? And all I need is to chop out all the parts of my project code that aren't related to the buggy feature, paste my error message, and make a magic wish? I love this help desk! (exaggerated, but I think you get the point).

Only if the asker really goes and reads the MRE page will they see:

For more information on how to debug your program so that you can create a minimal example, Eric Lippert has written a fantastic blog post on the subject: How to debug small programs.

In relation to Shog9's post on problem-solving effort

I'm glad HenryEcker brought up the Adding "lack of effort" as a close vote reason post, because it's marked as a dup of a post where there's a response written by @Shog9 that makes this discussion even more interesting. Shog9 defined three types of effort: research effort (looking for a solution before asking), definition effort (defining a clear, specific question), and problem-solving effort. He showed that we have close reasons for the first two types of effort, but not for the third, because:

  • Judging problem-solving effort is really subjective. Assuming sufficient research and definition effort, you're left to make a decision as to whether or not the asker has suffered enough yet; this quickly turns into a sick Milgram experiment.

  • Trying to maximize effort actively subverts the purpose of this site. We're trying to create a library of reusable information here, with the idea that if someone takes the time to define their problem and then search for it they won't have to ask a question at all! When it works, any answer can go on to benefit many people beyond the person who asked the question [...] If we disallow all questions that don't require investment beyond research, we give up the ability for folks to research their problems using Stack Overflow, and end up with a library of questions so specific to their askers as to be worthless to anyone else.

Important note: My interpretation of Shog9's use of the word "specific" in his definition of definition effort is that they mean "detailed-enough"/"well-specified"/"as-opposed-to-lacking-in-focus" (the good kind of specific), and not "being a very non-generic question that has lots of contextual dependencies that others with the same generic problem will not have" (bad kind of specific, which is the sense of the word I think he is using when he says "a library of questions so specific to their askers as to be worthless to anyone else").

It's strange because debugging (at least in my mind) falls under problem-solving effort, and yet, as the result of a lack of this specific type of problem-solving(?) effort, we have gotten the "bad ending" (video game terminology) instead of the "good ending": we have a library of questions so specific to their askers as to be worthless to anyone else.

I thought about this more, and my theory (needs confirmation from Shog9) is that any amount of debugging effort required to create an MRE falls under the category of definition effort rather than problem-solving effort.

The system doesn't do a good enough job of automating/forcing askers to learn how to ask good questions before asking

The system doesn't give askers any clear pointer to read the instructions on creating an MRE until they fail to ask a question that meets its criteria, and the burden of evaluating those criteria falls on human volunteers.

While my frustrated mind thinks the solution must be simple, there's also a very good chance it isn't. If you'd like to know about current efforts to improve the system, go read about the Ask Wizard, Staging Ground Workflow, and the new user onboarding project (which also contains a lot of good related discussion in the form of feedback).

Other close reasons and constructive resolutions

In Shog9's answer to this post about whether it's okay to comment that Stack Overflow is not a code writing service, he says that we shouldn't respond to lazy questions by a lazy action:

Even if you don't believe these comments are inherently rude, the sheer inefficiency and dishonesty that rides their coattails has gotta be a bit off-putting. If you're worried about dishonest students, maybe start by not pulling the same lazy, manipulative crap that they are; if you want to help receptive askers, then focus on giving them something they can actually use.

If we really put in the work (helping the help vampires and accepting a fate as a help desk for non-generic questions), I'll bet that due to the very nature of the questions being non-generic, even if they are solved/answered, the answers will not be useful to many people. Debugging is the process of finding bugs. Many bugs created by "non-expert"/novice programmers are due to simple mistakes.
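To illustrate the kind of simple mistake I mean, here's a toy example of my own (not taken from any particular question), in SQL:

    -- Classic slip: "= NULL" never matches anything, so this silently returns no rows.
    SELECT Id FROM Posts WHERE ClosedDate = NULL;
    -- What was actually meant:
    SELECT Id FROM Posts WHERE ClosedDate IS NULL;

Mistakes like this don't need an expert to fix; they need the kind of systematic checking that the debugger FAQ is trying to teach.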

TL;DR
I agree, and I think our rule books do too. I'm glad we have that FAQ post to "teach askers how to fish", but a question that asks for a particular fish is not the same as a question asking how to catch any fish. The deeper problem we have to figure out how best to tackle is that we're getting tons of non-generic debugging-problem questions (that don't meet the "minimal" and "example" qualifications of an MRE) instead of generic, generally useful questions (that meet all the qualifications of an MRE). You're probably right that there are better solutions. I'll try to give an analysis of the current situation with our minimal reproducible example guidance, list some existing SE projects to improve the system, and then touch on other valid response options.

Please correct me if you find a mistake!

Here's the deeper problem with our current scenario that I think is motivating the use of this technique of dup-closing to an FAQ page: We're getting treated as a debugging help desk, and the library which was intended to be filled with generic questions that can help many people is being filled with non-generic questions that are likely only to help the original asker and perhaps a small handful of people.

If you look at the 4th and 5th tables of my first SEDE query, you'll see that for posts that get a comment linking to the debugger FAQ post (i.e. posts where someone makes a "friendly link" to it instead of dup-close-voting/flagging), 55.07% are currently deleted, 10.68% are closed but not deleted, and only roughly 10% of them have a positive score. They're generally not good questions: they don't meet some combination of our guidelines for writing good questions.
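For reference, here's a rough sketch of the general shape of that kind of SEDE query. It is not the exact query linked above; the debugger FAQ post id, the URL patterns matched in comment text, and how completely SEDE retains comments on since-deleted posts are all assumptions on my part:

    -- Status breakdown of questions that received a comment linking to the
    -- debugger FAQ post (post id 25385173 assumed).
    SELECT [Status],
           COUNT(*) AS [Questions],
           SUM(CASE WHEN Score > 0 THEN 1 ELSE 0 END) AS [Positive score]
    FROM (
        SELECT p.Id, p.Score,
               CASE WHEN p.DeletionDate IS NOT NULL THEN 'deleted'
                    WHEN p.ClosedDate   IS NOT NULL THEN 'closed, not deleted'
                    ELSE 'open' END AS [Status]
        FROM PostsWithDeleted p
        WHERE p.PostTypeId = 1 -- questions only
          AND EXISTS (SELECT 1 FROM Comments c
                      WHERE c.PostId = p.Id
                        AND (c.Text LIKE '%stackoverflow.com/q/25385173%'
                          OR c.Text LIKE '%stackoverflow.com/questions/25385173%'))
    ) AS linked
    GROUP BY [Status];

The real query does more than this, but the deleted / closed-but-not-deleted / positive-score buckets above are what the percentages refer to.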

Just for some bonus SEDE fun, here's a graph of the current status of posts which are linked to the debugger FAQ post either in a comment, or closed as a duplicate, grouped by the creation date of the post. Note the funny (and sad) onslaught of poor questions that we get during our "eternal septembers". Here's a view of the top FAQ being used as dup targets over time.
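Similarly, a hedged sketch of the per-month version behind that graph (the duplicate LinkTypeId value and the FAQ post id are, again, my assumptions rather than the exact linked query):

    -- Questions tied to the debugger FAQ post either by a comment link or by a
    -- duplicate closure, counted per month of question creation.
    SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, p.CreationDate), 0) AS [Month],
           COUNT(DISTINCT p.Id) AS [Questions]
    FROM PostsWithDeleted p
    LEFT JOIN Comments c
           ON c.PostId = p.Id
          AND c.Text LIKE '%stackoverflow.com/q/25385173%'
    LEFT JOIN PostLinks l
           ON l.PostId = p.Id
          AND l.RelatedPostId = 25385173
          AND l.LinkTypeId = 3 -- 3 = duplicate
    WHERE p.PostTypeId = 1 -- questions only
      AND (c.Id IS NOT NULL OR l.Id IS NOT NULL)
    GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, p.CreationDate), 0)
    ORDER BY [Month];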

Looking at that data, it almost makes you think "no wonder / thank goodness people are hammering and deleting these". Using a dup hammer is way faster than getting three close-votes, which may even require probing for enough information to know which close reason to use. Are there other solutions than that? Yeah.

We can tell people how to ask a good generic question, so that they know what questions aren't a good fit, and how to do a good job of asking questions that are.

We already have an instruction manual for how to write good generic questions about debugging-related problems: the help center's page on how to create a minimal, reproducible example, which (among other things) states:

The more code there is to go through, the less likely people can find your problem. Streamline your example in one of two ways:

  1. Restart from scratch. Create a new program, adding in only what is needed to see the problem. Use simple, descriptive names for functions and variables – don’t copy the names you’re using in your existing code.
  2. Divide and conquer. If you’re not sure what the source of the problem is, start removing code a bit at a time until the problem disappears – then add the last part back.

[...]

For more information on how to debug your program so that you can create a minimal example, Eric Lippert has written a fantastic blog post on the subject: How to debug small programs.

But this isn't preventing bad questions from being asked because we're not making askers read it.

The system doesn't do a good enough job of automating/forcing askers to learn how to ask good questions before asking

The current system doesn't give askers any clear pointer to read the instructions on creating an MRE until they fail to ask a question that meets its qualifications, and the burden of evaluating those criteria falls on human volunteers.

While my frustrated mind thinks the solution must be simple, there's also a very good chance it isn't.

There are efforts that the SE team is making to address this: read about the new user onboarding project, which contains a lot of good related discussion in the form of feedback, such as this one by Shog9.

There's also the Ask Wizard and Staging Ground Workflow. Unfortunately, the current Ask Wizard seems to assume that any question is a debugging-problem question, and the Staging Ground Workflow doesn't seem to be designed to take load off of reviewers.

Other close reasons and constructive resolutions

add link to new user onboarding project
starball

add graph of top faq dup targets over time
starball

update a section heading for previous updates.
starball

HUGE update: The help center can't stop me because I don't know how to read. The MRE help page _actually does_ suggest that debugging will help to write an MRE.
starball

I went afk and can't remember what I changed this time
starball

fix typo "one" -> "on" :/
starball

misc touchups
starball

more updates.
starball

update SEDE queries
starball

some comments about possible limitations of my SEDE queries
starball

Active reading [<en.wiktionary.org/wiki/nonexistent#Adjective> <en.wiktionary.org/wiki/help_desk#Noun> <en.wiktionary.org/wiki/handful#Noun> <en.wikipedia.org/wiki/Sentence_clause_structure#Run-on_sentences>]. Used a more direct cross reference (as user names can change at any time).
Peter Mortensen

Fixed the link syntax (missing left parenthesis) - as a result the diff looks more extensive than it really is - use view "Side-by-side Markdown" to compare.
Peter Mortensen

remove unfinished sentence and add option to close NDD for non-minimal example.
starball

add some connections to another related MSO Q&A I just learned about
starball

some formatting touchups
starball