
Say there are 3 categories being selected with probability $\theta_i$, $i=1,2,3$. After $n$ independent multinomial trials, we observe $n_i$ outcomes of category $i$.

Then someone told me that we can actually know $\theta_2=0.1$ for certain.

Now, I am only interested in the $\theta_1$, i.e. I would like to know $E(\theta_1\mid n_1,n_2,n_3)$. To do the Bayesian inference, I see two ways:

  1. Via the conditional distribution: Conditional on $\theta_2=0.1$, I can construct a new binomial model with success probability $p=\frac{\theta_1}{1-0.1}$ and failure probability $1-p=\frac{\theta_3}{1-0.1}$, and calculate $E(\theta_1\mid n_1, n_3)$, ignoring the evidence of $n_2$. In other words, I observe that $n_1$ out of $n_1+n_3$ trials are category $1$.

  2. Via the marginal distribution: the marginal distribution of $n_1$ is again binomial, with success probability $\theta_1$ (and failure probability $1-\theta_1$). Then I can calculate $E(\theta_1\mid n_1,n_2,n_3)$. In other words, I observe that $n_1$ out of $n_1+n_2+n_3$ trials are category $1$.

Note that in both ways above I can assume the same marginal prior distribution $f_{\theta_1}(\cdot)$ for $\theta_1$ (from which the $f_p(\cdot)$ needed in 1. follows easily).

At first I thought both should give me the same result, as I could not see anything wrong with either of them, but I found out they are different (by playing with a two-point prior distribution for $f_{\theta_1}$).
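For concreteness, here is a minimal numerical sketch of that experiment (the two-point prior values and the counts $n_1,n_2,n_3$ are made up purely for illustration):

```python
import numpy as np

# Hypothetical two-point prior on theta_1.
theta1_vals = np.array([0.2, 0.6])
prior = np.array([0.5, 0.5])

theta2 = 0.1                 # known for certain
n1, n2, n3 = 6, 1, 3         # hypothetical observed counts

# 1st way: condition on theta_2 = 0.1 and use the binomial on categories
# 1 vs 3, with p = theta_1 / (1 - theta_2).
p = theta1_vals / (1 - theta2)
lik_cond = p**n1 * (1 - p)**n3
post_cond = prior * lik_cond / np.sum(prior * lik_cond)

# 2nd way: marginal binomial "category 1 vs not category 1",
# which never uses the fact that theta_2 is known.
lik_marg = theta1_vals**n1 * (1 - theta1_vals)**(n2 + n3)
post_marg = prior * lik_marg / np.sum(prior * lik_marg)

print("E[theta_1 | data], 1st way:", post_cond @ theta1_vals)
print("E[theta_1 | data], 2nd way:", post_marg @ theta1_vals)
# The two expectations differ, confirming the observation above.
```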

Now I see that the 2nd way seems to ignore some very useful information, namely the fact that I know $\theta_2=0.1$ for sure. That is why they differ.

My questions are:

  1. Are my observations/conclusions so far all correct?
  2. Is there any way to incorporate $\theta_2=0.1$ in the 2nd approach above?
  3. Generally, is the 1st way of reasoning useless? That is, is the property that a conditional distribution of a multinomial distribution is still multinomial of no practical use, since in reality we rarely know something like $\theta_2=0.1$ for sure? Or is there a good example of an application here?

Hope I am making sense...Thanks.


1 Answer

1) No, you missed a detail.

2) Yes!

3) We may not know $\theta_2=0.1$ for sure; however, we can always put a prior on it and update that prior based on observations. E.g. at first we may believe a die is unbiased, but after several observations we may come to suspect that it is biased, and then we can compute the posterior from our observations (the likelihood).
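As a minimal sketch of that prior-updating idea (the symmetric Dirichlet prior and the counts below are made-up illustrations, not from the question), one can use the standard Dirichlet-multinomial conjugate pair:

```python
import numpy as np

# Dirichlet prior on (theta_1, theta_2, theta_3); Dirichlet(1,1,1) is uniform.
alpha = np.array([1.0, 1.0, 1.0])
counts = np.array([6, 1, 3])     # hypothetical multinomial counts

# The Dirichlet is conjugate to the multinomial, so the posterior is
# Dirichlet(alpha + counts), and its mean is available in closed form.
alpha_post = alpha + counts
posterior_mean = alpha_post / alpha_post.sum()
print("E[theta | counts] =", posterior_mean)   # [7/13, 2/13, 4/13]
```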

Your calculation in the conditional distribution is correct; however, don't forget that from $p = \frac{\theta_1}{1-0.1}$ we have $\theta_1=0.9p$. If $p= \frac{n_1}{n_1+n_3}$, then $\theta_1= \frac{0.9\,n_1}{n_1+n_3}$ (not $\frac{n_1}{n_1+n_3}$!).

In the marginal distribution, it should be written as $$\theta_1 = \frac{n_1}{n_1+n_2+n_3} \tag{1}$$ subject to $$\frac{n_2}{n_1+n_2+n_3} = 0.1. \tag{2}$$ From (2) we have $$n_2 = \frac{n_1+n_3}{9}. \tag{3}$$ Plugging (3) into (1) gives $$\theta_1 = \frac{n_1}{n_1+n_3+\frac{n_1+n_3}{9}} = \frac{0.9\,n_1}{n_1+n_3}.$$
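A quick numeric check of this algebra (the counts are hypothetical, chosen so that constraint (2) holds exactly):

```python
# n2 / (n1 + n2 + n3) = 1/10 = 0.1, so (2) is satisfied exactly.
n1, n3 = 6, 3
n2 = (n1 + n3) / 9                   # from (3); here n2 = 1
marginal = n1 / (n1 + n2 + n3)       # equation (1)
conditional = 0.9 * n1 / (n1 + n3)   # the conditional-approach expression
print(marginal, conditional)         # both print 0.6
```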

The conclusion is the same as before!

P.S. This question is like a variant of the famous Monty Hall problem: instead of showing you a goat behind a door, here the host only tells you the probability of a goat.

