
I'm using the CausalImpact package in R, and, as I expect is typical, the findings are very sensitive to the prior being used.
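For reference, the knob in question can be set through `model.args`; a minimal sketch, where `dat`, `pre.period`, and `post.period` stand in for your own data and period definitions:

```r
library(CausalImpact)

# `dat`, `pre.period`, `post.period` are placeholders for your own inputs.
# prior.level.sd is the prior SD of the local-level random walk; the
# package default is 0.01 (on the standardized scale of the data).
impact_default <- CausalImpact(dat, pre.period, post.period)
impact_tight   <- CausalImpact(dat, pre.period, post.period,
                               model.args = list(prior.level.sd = 0.001))
```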

I think I have a reasonable understanding of what the prior is doing in this model, but I'm still struggling to determine on theoretical grounds precisely what the prior SD should be, especially since the result is so sensitive to small changes. In the context I'm working in, my colleagues are somewhat hesitant to move away from the default (even though the data we're working with differ from the setting the default was designed for). There is also the concern that the client will be suspicious of anything that could be interpreted as "biasing" the result.

The idea I'm considering is to run the model with a variety of prior level SDs, treating each data point in the second half of the pre-intervention period in turn as a pseudo-intervention point, and then (for each prior level SD) to extract the prediction errors from all of these analyses and aggregate them into a single performance score, perhaps the Mean Squared Prediction Error (MSPE). I could then select the prior SD that yields the lowest MSPE (or some related measure) on the pre-intervention period; a sketch of what I have in mind is below.
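To make the idea concrete, here is a minimal sketch of that backtest, assuming the data sit in a matrix or data frame `dat` whose first column is the response and whose true intervention starts at row `intervention` (both names are placeholders):

```r
library(CausalImpact)

# Backtest one candidate prior SD over the second half of the pre-period:
# each held-out point t is treated as a pseudo-intervention and the model
# is asked for a one-step-ahead counterfactual forecast at t.
backtest_mspe <- function(dat, intervention, prior_sd) {
  pre_end <- intervention - 1
  holdout <- seq(floor(pre_end / 2) + 1, pre_end)  # second half of pre-period
  errors <- sapply(holdout, function(t) {
    imp <- CausalImpact(dat[1:t, ],
                        pre.period  = c(1, t - 1),
                        post.period = c(t, t),
                        model.args  = list(prior.level.sd = prior_sd))
    # Prediction error: actual response minus the forecast at time t
    as.numeric(dat[t, 1]) - as.numeric(tail(imp$series$point.pred, 1))
  })
  mean(errors^2)  # Mean Squared Prediction Error for this prior SD
}

candidate_sds <- c(0.001, 0.01, 0.05, 0.1)
mspe_by_sd <- sapply(candidate_sds, function(s) backtest_mspe(dat, intervention, s))
candidate_sds[which.min(mspe_by_sd)]  # prior SD with the lowest pre-period MSPE
```

Note this refits the model once per held-out point per candidate SD, so it can be slow; lowering `niter` in `model.args` during the search is one way to speed it up.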

Would this be a legitimate method? Is there some other more principled way of selecting priors in this context?
