
Questions tagged [posterior]

In Bayesian statistics, the term 'posterior' refers to the probability distribution of a parameter conditioned on the observed data.

3 votes
1 answer
219 views

Basic question about deriving MAP estimator

Say we have a random process $X(t, u)$ parametrized by $t$ and $u$ that generates data $x$. We also have a prior on $u$, $p(u)$. Am I correct in stating that the expression to find the maximum a ...
asked by DangerousTim
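
A minimal sketch of the expression being asked about, assuming the likelihood of the observed data $x$ for fixed $t$ is written $p(x \mid u, t)$:

$$
\hat{u}_{\text{MAP}} = \arg\max_{u}\, p(u \mid x) = \arg\max_{u}\, p(x \mid u, t)\, p(u),
$$

since the evidence $p(x)$ does not depend on $u$.
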
1 vote
0 answers
27 views

How can predictive distributions be considered as expectations?

I guess that the prior and posterior predictive distributions can be considered as expectations of $p(y|\theta )$ (in the case of the prior predictive distribution) and $p(\widetilde{y}|\theta )$ (in the case of ...
asked by Sherlock_Hound
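
A sketch of the identity the question is pointing at, writing $y$ for observed data and $\widetilde{y}$ for new data, and assuming $\widetilde{y} \perp y \mid \theta$:

$$
p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta = \mathbb{E}_{p(\theta)}\!\left[\,p(y \mid \theta)\,\right],
\qquad
p(\widetilde{y} \mid y) = \int p(\widetilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta = \mathbb{E}_{p(\theta \mid y)}\!\left[\,p(\widetilde{y} \mid \theta)\,\right],
$$

i.e. the prior predictive is an expectation under the prior and the posterior predictive is an expectation under the posterior.
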
0 votes
0 answers
11 views

Two-step Gibbs sampling vs. block Gibbs sampling

While reading Bayesian-related technical articles, I come across algorithms such as two-step Gibbs sampling and block Gibbs sampling ...
asked by user3269 (5,222 rep)
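
For orientation, a minimal two-step Gibbs sketch on a toy target not taken from the question (a standard bivariate normal with correlation rho); a block Gibbs sampler would instead update several components jointly from their joint conditional:

```python
import numpy as np

# Two-step (systematic-scan) Gibbs sampler for a standard bivariate normal
# with correlation rho, whose full conditionals are available in closed form:
#   x | y ~ N(rho * y, 1 - rho^2)
#   y | x ~ N(rho * x, 1 - rho^2)
def two_step_gibbs(n_iter=5000, rho=0.8, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    sd = np.sqrt(1.0 - rho**2)
    draws = np.empty((n_iter, 2))
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)  # step 1: draw x from p(x | y)
        y = rng.normal(rho * x, sd)  # step 2: draw y from p(y | x)
        draws[i] = (x, y)
    return draws

samples = two_step_gibbs()
print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
```
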
1 vote
1 answer
29 views

Known variance in the conjugate normal model

$\text{Posterior mean}=\frac{1}{\frac{1}{\sigma_{0}^{2}} + \frac{n}{\sigma^{2}}}\left( \frac{\mu_{0}}{\sigma_{0}^{2}} + \frac{\sum_{i=1}^{n} x_i}{\sigma^2} \right)$ Using this updating equation with known ...
asked by hovjr (63 rep)
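
A small numeric sketch of that updating equation, assuming a $\mathrm{N}(\mu_0, \sigma_0^2)$ prior on the mean and a known data variance $\sigma^2$ (the function and data below are illustrative):

```python
import numpy as np

def normal_known_variance_posterior(x, mu0, sigma0_sq, sigma_sq):
    """Posterior mean and variance for the mean of a normal with known
    variance sigma_sq, under a conjugate N(mu0, sigma0_sq) prior."""
    x = np.asarray(x, dtype=float)
    n = x.size
    post_precision = 1.0 / sigma0_sq + n / sigma_sq   # sum of prior and data precisions
    post_var = 1.0 / post_precision
    post_mean = post_var * (mu0 / sigma0_sq + x.sum() / sigma_sq)
    return post_mean, post_var

x = np.array([4.8, 5.1, 5.4, 4.9])
print(normal_known_variance_posterior(x, mu0=0.0, sigma0_sq=10.0, sigma_sq=1.0))
```
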
1 vote
0 answers
19 views

Bayes factor for a hypothesis test

I am studying Bayesian hypothesis testing and I want to calculate the Bayes factor for \begin{align*} H_0: \lambda = 1 \quad \text{vs.} \quad H_1:\lambda > 2 \end{align*} with $p(\...
asked by daniel (155 rep)
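
The general form being computed, assuming a point null at $\lambda = 1$ and a prior $p(\lambda \mid H_1)$ supported on $\lambda > 2$ (the priors themselves are truncated in the excerpt):

$$
B_{01} = \frac{p(y \mid H_0)}{p(y \mid H_1)}
= \frac{p(y \mid \lambda = 1)}{\displaystyle\int_{2}^{\infty} p(y \mid \lambda)\, p(\lambda \mid H_1)\, d\lambda}.
$$
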
0 votes
0 answers
26 views

How to obtain the likelihood $P(B \mid R)$ given the prior $P(R)$ and the posterior $P(R \mid B)$?

I am working on a topic related to multiple-choice response. I would like to measure the efficiency of the information source (or a student’s information search) and I believe Bayesian statistics is ...
asked by Francisco
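
For events, Bayes' theorem can simply be rearranged, although the marginal $P(B)$ is needed in addition to the prior and the posterior:

$$
P(R \mid B) = \frac{P(B \mid R)\, P(R)}{P(B)}
\quad\Longrightarrow\quad
P(B \mid R) = \frac{P(R \mid B)\, P(B)}{P(R)}.
$$
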
2 votes
0 answers
39 views

Likelihood from posterior [closed]

This question may seem strange and perhaps silly, but it would be very useful for my research. Is there any method to find the likelihood given a prior distribution and its corresponding posterior ...
asked by Francisco
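
In the continuous-parameter version of the same question, the likelihood is recovered only up to the normalizing constant $p(y)$ (the marginal likelihood):

$$
p(y \mid \theta) = \frac{p(\theta \mid y)\, p(y)}{p(\theta)} \;\propto\; \frac{p(\theta \mid y)}{p(\theta)}.
$$
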
1 vote
1 answer
47 views

How to choose what to integrate out or what to condition on for marginal distributions?

I am trying to work out the Bayesian posteriors of $\theta$, $\tau$ and the $\varepsilon$ in the following model: $$y(t) = \phi(t,\tau)\theta+v(t),$$ where $\{v(t)\}$ is an iid sequence of random ...
asked by MJPeel (11 rep)
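
As a general sketch for such models, a nuisance parameter is integrated out of the joint posterior rather than conditioned on; for the marginal posterior of $\theta$, for example,

$$
p(\theta \mid y) = \int p(\theta, \tau \mid y)\, d\tau \;\propto\; \int p(y \mid \theta, \tau)\, p(\theta, \tau)\, d\tau,
$$

whereas $p(\theta \mid y, \tau)$ treats $\tau$ as known.
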
0 votes
1 answer
85 views

Normal approximation for posterior distribution

I am reading Example 4.3.3 of "The Bayesian Choice" by Christian P. Robert and I was wondering whether it is possible to obtain a normal approximation to the posterior in this case. ...
asked by daniel (155 rep)
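
Not Robert's example, but a generic sketch of how a normal (Laplace) approximation to a posterior is usually obtained: locate the posterior mode and use the inverse of the negative log-posterior curvature at the mode as the approximate variance. The toy Beta posterior below is purely illustrative:

```python
import numpy as np
from scipy import optimize

# Laplace sketch: approximate p(theta | y) by N(mode, 1 / H), where H is the
# second derivative of the negative log posterior at the mode.
a, b = 8.0, 4.0  # toy Beta(a, b) posterior for a Bernoulli probability

def neg_log_post(theta):
    # negative (unnormalized) log density of Beta(a, b)
    return -((a - 1) * np.log(theta) + (b - 1) * np.log(1 - theta))

res = optimize.minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6), method="bounded")
mode = res.x
# curvature at the mode via central differences
h = 1e-5
curv = (neg_log_post(mode + h) - 2 * neg_log_post(mode) + neg_log_post(mode - h)) / h**2
approx_sd = np.sqrt(1.0 / curv)
print(f"Laplace approximation: N({mode:.3f}, {approx_sd:.3f}^2)")
```
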
0 votes
1 answer
30 views

Interaction: posterior comparison in brms, difference between as_draws() and posterior_predict(). Is it correct to interpret posterior_predict() instead?

I have an interaction effect in my model, and I want to extract the posterior of each of my parameters in order to compare them and make inferences about them. I couldn't simply use the as_draws() ...
asked by Guillaume Pech
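
Roughly, the two functions target different distributions: as_draws() returns draws of the parameters from $p(\theta \mid y)$, while posterior_predict() returns simulated new observations from the posterior predictive,

$$
p(\widetilde{y} \mid y) = \int p(\widetilde{y} \mid \theta)\, p(\theta \mid y)\, d\theta,
$$

which also includes observation-level noise, so it is usually not the object to inspect when the goal is to compare the parameters themselves.
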
1 vote
0 answers
37 views

How to prove the posterior probability formula for the multivariate case (i.e. dimension $d\ge 2$)?

Suppose there are $k$ groups, $\pi_1, \pi_2, \cdots, \pi_k$, with the probability density function for group $\pi_i$ being $f_i(\boldsymbol{x})$, where $\boldsymbol{x}\in \mathbb{R}^d$. The prior probability ...
asked by John Stone
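
Writing the prior probabilities as $p_1, \dots, p_k$ (the excerpt truncates before they are named), the posterior probability of group membership has the same form in any dimension $d$:

$$
P(\pi_i \mid \boldsymbol{x}) = \frac{p_i\, f_i(\boldsymbol{x})}{\sum_{j=1}^{k} p_j\, f_j(\boldsymbol{x})}, \qquad \boldsymbol{x} \in \mathbb{R}^d,
$$

since Bayes' theorem itself does not depend on $d$.
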
1 vote
0 answers
22 views

How to leverage separable functions in MCMC sampling? [closed]

I'm considering the posterior of a parametric model via the Bayesian approach. More specifically, I have a parametric model $u(p_1,p_2, p_3) = u_1(p_1) \times u_2(p_2) \times u_3(p_3)$ and I want to ...
asked by CC Kuo (11 rep)
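
If the separability carries through to the target density, i.e. the posterior itself factorizes (which depends on how $u$ enters the likelihood and on the prior, so this is an assumption),

$$
\pi(p_1, p_2, p_3 \mid \text{data}) = \pi_1(p_1)\, \pi_2(p_2)\, \pi_3(p_3),
$$

then each block $p_i$ can be sampled independently and the draws recombined, which is exactly what a blocked MCMC scheme would exploit.
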
0 votes
0 answers
27 views

Overcoming posterior correlation for a model with random effects (for a Gibbs sampler)

I am trying to infer parameters for a model of case numbers of different infectious diseases in different locations over time. The model is $$ \log \left(1 + y_{ijt}\right)\sim\mathsf{Normal}\left(\mu ...
asked by Till Hoffmann
5 votes
1 answer
55 views

Can we get probabilistic predictions evaluable by proper scoring rules from Bayesian inference without evaluating the marginal likelihood?

Let's say we have a vector of inputs, $X=[x_0,\dots, x_{n-1}]$, and a vector of outputs, $Y=[y_0, \dots, y_{n-1}]$. We would like to predict the distribution of a new output, $\hat{y}$, given a new ...
asked by QMath (451 rep)
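
The usual route is the posterior predictive, which requires posterior draws but never the marginal likelihood itself; writing $\hat{x}$ for the new input (an assumption, since the excerpt is truncated) and $\theta_1, \dots, \theta_S \sim p(\theta \mid X, Y)$ for MCMC samples,

$$
p(\hat{y} \mid \hat{x}, X, Y) = \int p(\hat{y} \mid \hat{x}, \theta)\, p(\theta \mid X, Y)\, d\theta
\;\approx\; \frac{1}{S} \sum_{s=1}^{S} p(\hat{y} \mid \hat{x}, \theta_s),
$$

and proper scoring rules such as the log score can then be evaluated on this approximation.
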
1 vote
0 answers
28 views

Mutual Information decay

Consider $m$ channels indexed by $i$ with $1 \leq i \leq m$. The input alphabets are from the same finite set $\mathcal{X}$. Let $\pi$ denote a probability distribution on $\mathcal{X}$. Define the ...
asked by Sushant Vijayan
