
  • 8
    You may be interested in this related question.
    Commented Jun 25 at 16:02
  • 3
    What I personally find more concerning is when people really want to use a parametric test to answer question X, then find that its assumptions are not met, so they use a nonparametric alternative... which actually answers a different question Y: stats.stackexchange.com/a/624494/1352 (see the first sketch after this thread)
    Commented Jun 25 at 16:49
  • 2
    Given Gelman's explanation, I think his claim is fine. I don't agree with the terminology ("accept the null"), but that is more an issue of phrasing than of the practical consequences of failure to reject (where he is quite clear about his actions).
    – Glen_b
    Commented Jun 26 at 5:17
  • 3
    Just to stir up an already complicated and contentious question: I too still see people agonising over these matters, and I want to urge that this supposed dilemma -- parametric or nonparametric tests -- is largely a throwback to the 1950s. Using transformed scales, generalized linear models, and/or confidence intervals rather than significance tests goes a long way towards many better analyses (though not all); a sketch of that style of analysis follows this thread. This is not to deny the importance of laying out a plan of analysis in advance, which is by far the best practice and strongly advisable for, say, clinical trials.
    – Nick Cox
    Commented Jun 26 at 10:43
  • 5
    In practice, it needs massive reform of scientific publication to get people to be explicit and honest about all the decisions they made -- e.g. about missing values, outliers, nonlinearity, etc. -- and I am not even convinced that would be ideal: it would lead to unreadable papers that no one wants to read. There are many studies with more or less the right conclusions for the wrong reasons (e.g. citing significance tests that are irrelevant or ornamental). Reproducibility is important, but having access to others' data and being able to carry out your own analyses are the largest part of that.
    – Nick Cox
    Commented Jun 26 at 10:51
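
To make the comment about a nonparametric alternative "answering a different question Y" concrete, here is a minimal Python sketch. The distributions, sample sizes, and use of scipy are assumptions for illustration only: the two samples are built to have essentially the same mean but different shapes, so a test about means (Welch's t-test) and a rank test about whether one sample tends to produce larger values (Mann-Whitney) are asking genuinely different questions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two samples with (approximately) the same mean but different shapes.
# These particular distributions are an arbitrary choice for illustration.
x = rng.exponential(scale=1.0, size=500)   # mean 1, strongly right-skewed
y = rng.uniform(0.5, 1.5, size=500)        # mean 1, symmetric

# Question X: "do the population means differ?" -- Welch's t-test.
t_res = stats.ttest_ind(x, y, equal_var=False)

# Question Y: "does one sample tend to take larger values than the other?"
# -- the usual "nonparametric alternative", the Mann-Whitney U test.
u_res = stats.mannwhitneyu(x, y, alternative="two-sided")

print(f"Welch t-test   p = {t_res.pvalue:.3f}  (means {x.mean():.2f} vs {y.mean():.2f})")
print(f"Mann-Whitney U p = {u_res.pvalue:.2e}")
```

Here the means are equal by construction, but P(X > Y) is roughly 0.38 rather than 0.5, so the rank test will usually reject while the t-test usually will not: switching tests has quietly switched the hypothesis being examined.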
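
Similarly, the "transformations, GLMs, and confidence intervals" suggestion can be sketched in a few lines. Everything below (the simulated Gamma responses, the roughly 30% group effect, and the use of statsmodels) is an assumption for the example, not a recipe from the thread: the point is only that a Gamma GLM with a log link handles skewed positive responses directly and yields an interval estimate of the effect, rather than forcing a choice between a t-test and a rank test.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical right-skewed positive responses (e.g. costs or reaction times)
# for two groups; group 1 is simulated with a mean about 30% higher.
group = np.repeat([0.0, 1.0], 100)
mu = np.where(group == 1, 1.3, 1.0)
y = rng.gamma(shape=2.0, scale=mu / 2.0)        # Gamma responses with mean mu

# Gamma GLM with a log link: no transformation of y, no rank test needed,
# and the group coefficient is a log ratio of means.
X = sm.add_constant(group)
fit = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

ratio = np.exp(fit.params[1])          # multiplicative effect of group
lo, hi = np.exp(fit.conf_int()[1])     # 95% CI on the ratio scale
print(f"Estimated ratio of group means: {ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Reporting the estimated ratio with its confidence interval says how large the effect plausibly is, which is usually more informative than a bare reject/fail-to-reject decision.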