To put it briefly, this is now an operational question. Your observations are sound. In fact, nearly every statistician has considered (or should consider) exactly the problem you've described. Closely tied to your question is skepticism about the magic 0.05 threshold for significance.

The solution of a "p"-value was never posited as an omnibus for statistical reasoning. Rather, due to several contemporaneous factors, it was widely popular from its outset, and this popularity so heavily bent thinking toward its approach that now it is very hard to introduce any new "standard" inferential approach. Any approachattempt to do otherwise very quickly devolves to "back of the envelope" statisticalsignificance testing.

For instance, when I have submitted manuscripts that report a frequentist confidence interval in lieu of a p-value, reviewers are very quick to inspect and comment on whether the interval does (or does not) cross 0 for differences or 1 for ratios, even when no hypothesis is in question. Even for credible intervals, reviewers press for a non-informative prior, where for them "non-informative" means that the operating characteristics of the credible interval very closely resemble those of the frequentist confidence interval, and thus we come back to square one.
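
As a toy illustration of that "square one" feeling, here is a minimal sketch: simulated data, a normal approximation, and a deliberately huge prior variance standing in for "non-informative". All numbers are made up for illustration, not taken from any real analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up example data: two groups whose mean difference we want to estimate.
a = rng.normal(loc=0.3, scale=1.0, size=50)
b = rng.normal(loc=0.0, scale=1.0, size=50)

diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# Frequentist 95% confidence interval (normal approximation).
ci = (diff - 1.96 * se, diff + 1.96 * se)

# Bayesian 95% credible interval for the difference under a very diffuse
# normal prior N(0, tau^2): with a normal likelihood the posterior is normal.
tau = 100.0                                   # huge prior sd standing in for "non-informative"
post_var = 1.0 / (1.0 / tau**2 + 1.0 / se**2)
post_mean = post_var * (diff / se**2)
cri = (post_mean - 1.96 * np.sqrt(post_var),
       post_mean + 1.96 * np.sqrt(post_var))

print(f"95% confidence interval: ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"95% credible interval:   ({cri[0]:.3f}, {cri[1]:.3f})")

# The reviewer's reflex: does the interval cross 0?
print("CI excludes 0:", ci[0] > 0 or ci[1] < 0)
```

With the prior spread that wide, the credible interval is numerically almost identical to the confidence interval, so the reviewer's "does it cross zero?" check returns the same verdict either way.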

Indeed, several (SEVERAL!) alternative procedures have been put forward. Bayes factors are probably the most noteworthy. But once one starts inspecting the frequentist operating characteristics of Bayes factor testing, it does not hold up well against the p-value. We have to begin with the realization that a "null" hypothesis is usually a ridiculous presumption, but scientists aren't yet ready for more nuanced thinking on this.
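
To make the "operating characteristics" point concrete, here is a small Monte Carlo sketch under a toy model: a normal mean with known variance, a point null, and a normal prior on the mean under the alternative. The sample size, prior scale, and cutoffs below are arbitrary illustrative choices, not recommendations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n, sigma, tau = 50, 1.0, 1.0   # sample size, known sd, prior sd under H1 (illustrative)
reps = 20_000
se = sigma / np.sqrt(n)

# Simulate sample means under the point null mu = 0.
xbar = rng.normal(0.0, se, size=reps)

# Exact Bayes factor BF_10 for H1: mu ~ N(0, tau^2) vs H0: mu = 0,
# since both marginal likelihoods of xbar are normal.
bf10 = stats.norm.pdf(xbar, 0.0, np.sqrt(se**2 + tau**2)) / stats.norm.pdf(xbar, 0.0, se)

# Two-sided p-value for the same simulated data.
pval = 2 * stats.norm.sf(np.abs(xbar) / se)

# How often does each rule "reject" when the null is actually true?
print("P(p < 0.05  | H0):", np.mean(pval < 0.05))
print("P(BF_10 > 1 | H0):", np.mean(bf10 > 1))
print("P(BF_10 > 3 | H0):", np.mean(bf10 > 3))
```

For a fixed n and prior scale, each Bayes factor cutoff is equivalent under the null to a z-test at some other significance level, and that implied level drifts as n grows; that is exactly the kind of frequentist behavior a skeptical reviewer will go looking for.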
