
  • I would also imagine BIC is equivalent to a "longer" (m-step-ahead) forecast, given its link to leave-k-out cross-validation. For 200 observations, though, it probably doesn't make much difference (a penalty of roughly 5p instead of 2p). Commented Feb 25, 2015 at 13:11
  • @CagdasOzgenc, I asked Rob J. Hyndman whether cross-validation is likely to systematically favour too-parsimonious models in the context given in the OP and got a confirmation, which is quite encouraging. I mean, the idea I was trying to explain in the chat seems to be valid. Commented Feb 26, 2015 at 7:10
  • There are theoretical reasons for favoring AIC or BIC: if one starts from likelihood and information theory, a metric based on them has well-known statistical properties. But often one is dealing with a data set that is not so large.
    – Analyst
    Commented Jun 15, 2018 at 19:41
  • 3
    $\begingroup$ I've spend a fair amount of time trying to understand AIC. The equality of the statement is based on numerous approximations that amount to versions of the CLT. I personally think this makes AIC very questionable for small samples. $\endgroup$
    – meh
    Commented Jun 15, 2018 at 20:39
  • 1
    $\begingroup$ @IsabellaGhement, why should it? There is no reason to restrict ourselves to this particular use of cross validation. This is not to say that cross validation cannot be used for model assessment, of course. $\endgroup$ Commented Dec 8, 2018 at 19:45
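As a rough illustration of the AIC/leave-one-out-CV equivalence the comments discuss, here is a minimal NumPy-only sketch (the data-generating setup, candidate orders, and all names are my own for illustration, not from the thread): for a linear regression, the exact leave-one-out error can be computed from the hat-matrix diagonal without refitting, and the model order it selects typically agrees closely with AIC's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
# True model is quadratic (order 2) with Gaussian noise
y = 1.0 + 2.0 * x - 1.5 * x**2 + rng.normal(0, 0.5, n)

def fit_poly(x, y, p):
    # Design matrix with columns 1, x, ..., x^p
    X = np.vander(x, p + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Hat-matrix diagonal, needed for the exact leave-one-out shortcut
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)
    return resid, h

def aic(resid, k, n):
    # Gaussian-likelihood form of AIC: n*log(RSS/n) + 2k
    return n * np.log(np.sum(resid**2) / n) + 2 * k

def loocv(resid, h):
    # Exact leave-one-out squared error for a linear smoother
    return np.mean((resid / (1 - h)) ** 2)

orders = list(range(1, 6))
aics, cvs = [], []
for p in orders:
    resid, h = fit_poly(x, y, p)
    aics.append(aic(resid, p + 2, n))  # p+1 coefficients + error variance
    cvs.append(loocv(resid, h))

best_aic = orders[int(np.argmin(aics))]
best_cv = orders[int(np.argmin(cvs))]
print("AIC picks order", best_aic, "| LOO-CV picks order", best_cv)
```

With n = 200 and a strong quadratic signal the two criteria almost always land on the same (or adjacent) order, which is the asymptotic-equivalence point made above; the small-sample caveats raised by meh are exactly where the two can diverge.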