17 events
when | what | by | license | comment
Aug 23, 2022 at 0:26 comment added skan Is there any easy way to do cross-validation in Bayesian Analysis or any alternative?
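One widely used option is approximate leave-one-out cross-validation via Pareto-smoothed importance sampling. A minimal sketch, assuming ArviZ and its bundled centered_eight example posterior (the dataset and call pattern are illustrative, not taken from this thread):

```python
# Approximate leave-one-out cross-validation (PSIS-LOO) for a Bayesian model.
# Sketch assuming ArviZ; the bundled "centered_eight" example posterior already
# stores the pointwise log-likelihood values that PSIS-LOO needs.
import arviz as az

idata = az.load_arviz_data("centered_eight")  # example posterior with log_likelihood
loo_result = az.loo(idata)                    # estimated expected log predictive density
print(loo_result)
```

The appeal is that the model is fitted once and the cross-validation is approximated from the posterior draws, rather than refitting for every fold.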
Apr 6, 2021 at 14:42 comment added Christian Hennig Thanks for your efforts. Chances are we shouldn't have a discussion regarding foundations here. Anyway, I agree that "the type of question isn't the same" (although this can differ between different varieties of Bayesians as well). Regarding robustness against misspecification, my impression is that there is more about this in the frequentist than in the Bayesian literature, or at least as much, beginning from the work of Huber, Hampel, Tukey in the sixties, although it is true that this is often ignored in practice. (There's more to frequentism than testing parametric point hypotheses.)
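A minimal sketch of the frequentist robust-estimation tradition mentioned above, assuming statsmodels (the simulated data, the injected outliers, and the choice of a Huber M-estimator are illustrative):

```python
# Robust regression with a Huber M-estimator, which downweights gross outliers
# instead of assuming the error model is exactly correct.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(size=100)
y[:5] += 15.0                      # a few gross outliers (mild misspecification)

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                                # classical least squares
huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # Huber M-estimator
print(ols_fit.params, huber_fit.params)
```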
Apr 6, 2021 at 14:35 comment added Dave Harris @Lewian However, to be fair about overfitting, tools such as AIC, BIC, and other information criteria reduce the risk of overfitting if you use a selection process such as stepwise regression or one of its alternatives.
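A concrete illustration of selection by information criterion, assuming statsmodels (the candidate models and simulated data are invented for the example):

```python
# Compare nested candidate regressions by AIC instead of repeated null-hypothesis tests.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X_full = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X_full[:, 0] + rng.normal(size=n)   # only the first predictor matters

candidates = {
    "x1":       X_full[:, [0]],
    "x1+x2":    X_full[:, [0, 1]],
    "x1+x2+x3": X_full[:, :3],
}
for name, X in candidates.items():
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    print(name, round(fit.aic, 2))   # lower AIC penalizes the extra, useless terms
```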
Apr 6, 2021 at 14:34 comment added Dave Harris @Lewian the difference between the methods, however, is that the null is asserted to be the truth. One could not calculate a p-value otherwise. The assertion is consequential. See Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E.-J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6, 291-298.
Apr 6, 2021 at 14:31 comment added Dave Harris @Lewian The Frequentist null hypothesis is by force of math assumed to be true. Only one model form is really open for discussion. There will be a Bayesian hypothesis that matches the Frequentist null and alternative hypotheses, but there will be others as well, or at least there can be. As such, it is easier to capture issues such as misspecification. That permits models to be a bit more robust. Note that if the Frequentist in this example lacks the true model and the true model is not a subset of the combinations, then the Bayesian lacks it as well. (continued)
Apr 6, 2021 at 14:26 comment added Dave Harris @Lewian this should probably be a question in itself. Consider the Frequentist hypothesis $\theta\ge{5}$. The hypothesis could be read as "it is a statement of fact that $\theta\ge{5}$." Now consider a subjective Bayesian where $\theta\sim\mathcal{N}(\mu,\sigma^2)$ with the same hypothesis. It could be read as "how often is $\theta\ge{5}$?" or "with what probability is that the case?" So, for starters, the type of question isn't the same. The second issue, the combinatoric nature, allows the mixing and matching of variable combinations. (continued)
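A minimal sketch of that second reading, assuming a normal posterior for $\theta$ with invented summaries (SciPy only):

```python
# Posterior probability that theta >= 5 under a normal posterior for theta.
from scipy.stats import norm

post_mean, post_sd = 5.4, 0.8                          # hypothetical posterior summaries
p_ge_5 = norm.sf(5.0, loc=post_mean, scale=post_sd)    # P(theta >= 5 | data)
print(f"P(theta >= 5 | data) = {p_ge_5:.3f}")
```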
Apr 5, 2021 at 15:17 comment added Christian Hennig "The combinatoric nature of Bayesian hypotheses, rather than binary hypotheses allows for multiple comparisons when someone lacks the "true" model for null hypothesis methods." How can a hypothesis be Bayesian? Being "Bayesian" isn't about the kind of hypotheses one is interested in, or is it? Also, how is it less of a problem for a Bayesian if the assumed model is not true?
Apr 5, 2021 at 15:11 comment added Dave Harris @RichardHardy thanks for the catch. Fixed it.
Apr 5, 2021 at 15:11 history edited Dave Harris CC BY-SA 4.0
error
Apr 5, 2021 at 4:11 comment added Richard Hardy "Bayesian models are intrinsically biased models": do you perhaps mean estimators rather than models? (I am thinking about model bias in terms of the bias-variance trade-off.) Also, "Bayesian models never less risky than alternative models"? Not exactly sure what you mean by risky, but is it perhaps the opposite? I found in another answer of yours that "all Bayesian estimators <...> are intrinsically the least risky way to calculate an estimator."
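For reference, the decomposition behind that trade-off, for an estimator $\hat\theta$ of $\theta$ under squared-error loss:

$$\operatorname{MSE}(\hat\theta)=\mathbb{E}\big[(\hat\theta-\theta)^2\big]=\operatorname{Bias}(\hat\theta)^2+\operatorname{Var}(\hat\theta),$$

so an estimator that accepts some bias (for example, through a prior) can still have lower risk if the accompanying reduction in variance is large enough.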
Apr 14, 2020 at 20:45 comment added Dave Harris @nbro No, I do not. I have not worked in neural networks in so many years that little I would say would be trustworthy.
Apr 25, 2018 at 13:15 comment added Scortchi - Reinstate Monica @AndrewM: There is an unbiased estimator of $\sigma$ in a normal model - stats.stackexchange.com/a/251128/17230.
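A quick simulation sketch of the estimator referenced in that link: the sample standard deviation $s$ is biased for $\sigma$ in a normal model, but dividing by the constant $c_4(n)=\sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$ makes it unbiased (the sample size, seed, and replication count below are arbitrary):

```python
# Check that s is biased for sigma while s / c4(n) is unbiased, in a normal model.
import numpy as np
from scipy.special import gammaln

def c4(n):
    # c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2), computed on the log scale
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

rng = np.random.default_rng(42)
n, sigma, reps = 5, 2.0, 200_000
samples = rng.normal(scale=sigma, size=(reps, n))
s = samples.std(axis=1, ddof=1)               # usual sample standard deviation
print(s.mean(), (s / c4(n)).mean(), sigma)    # s underestimates sigma; s / c4 does not
```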
Sep 30, 2017 at 15:03 comment added Andrew M Only a very few models (essentially a set with measure zero) permit the formation of unbiased estimators. For example, in a normal $N(\theta, \sigma^2)$ model, there is no unbiased estimator of $\sigma$. Indeed, most times we maximize a likelihood, we end up with a biased estimator.
Mar 6, 2017 at 14:56 vote accept MWB
Mar 4, 2017 at 17:18 history edited Dave Harris CC BY-SA 3.0
correct antecedent for pronoun
Mar 4, 2017 at 9:10 comment added Richard Hardy In "They begin with the optimization of minimizing the variance while remaining unbiased.", what is "They"?
Mar 3, 2017 at 22:54 history answered Dave Harris CC BY-SA 3.0