
All Questions

8 votes
1 answer
207 views

How to match my prior beliefs to beta distribution?

I have some data that I believe comes from the binomial distribution. I also have some old data from a past experiment that I would like to base my prior beliefs on. The old data observations are: $$6,...
asked by Ewan McGregor
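
The question's data is cut off above, so the counts in this sketch are hypothetical stand-ins. A minimal way to match a Beta prior to old binomial data is the method of moments on the observed proportions, assuming each old observation is a success count out of a fixed number of trials:

```python
import numpy as np

# Hypothetical stand-ins for the truncated old data: success counts
# out of n_trials = 10 per run.
old_counts = np.array([6, 4, 7, 5, 6])
n_trials = 10
p_hat = old_counts / n_trials

# Method-of-moments fit of Beta(alpha, beta) to the old proportions:
# match the Beta mean and variance to the sample mean m and variance v.
m, v = p_hat.mean(), p_hat.var(ddof=1)
common = m * (1 - m) / v - 1
alpha, beta = m * common, (1 - m) * common
print(alpha, beta)  # ~10.05, ~7.90 for these made-up counts
```
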
2 votes
1 answer
66 views

Centering Priors on MLEs vs. Using MLEs as Initial Conditions for MCMC [duplicate]

Here: Centering prior distributions on MLE/OLS estimates I ask about centering priors on MLEs in the context of a logistic regression (in my case with only categorical predictors), which I've seen a ...
asked by compbiostats • 1,579
1 vote
1 answer
525 views

MLE ≠ MAP under Gaussian Prior?

I saw a post on why MLE and MAP yield the same result under a uniform prior. But I was wondering about the case when they are under a Gaussian prior. I suppose they are different in this case, but I ...
asked by jimmy1998
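
A standard worked case (a normal likelihood with a conjugate normal prior, chosen here for illustration): for $x_1, \dots, x_n \sim N(\theta, \sigma^2)$ with prior $\theta \sim N(\mu_0, \tau^2)$,

$$\hat\theta_{\text{MLE}} = \bar{x}, \qquad \hat\theta_{\text{MAP}} = \frac{\tfrac{n}{\sigma^2}\bar{x} + \tfrac{1}{\tau^2}\mu_0}{\tfrac{n}{\sigma^2} + \tfrac{1}{\tau^2}},$$

so the two coincide only when $\mu_0 = \bar{x}$ or in the flat-prior limit $\tau^2 \to \infty$.
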
1 vote
0 answers
53 views

Using a different (but related) hypothesis for the prior in MAP

Say we have a general set of data $\mathcal{D} = \{\mathbf{x}_i, \mathbf{y}_i \}_{i \in N}$ of covariates $\mathbf{x}$ and observations $\mathbf{y}$. Our problem is in fitting a known model $\mathbf{y}...
asked by alexchanson
1 vote
1 answer
200 views

Is the prior in Bayes' formula a probability, or can it also represent a probability distribution?

Given the Bayes formula: $$ p(\theta|D) = \dfrac{p(D|\theta)p(\theta)}{p(D)} $$ If there is a distribution (let's say $g$) over the parameter $\theta$, how should one rewrite the Bayes formula? $D$ is ...
asked by Ash • 115
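
For a parameter with a prior density $g$, the standard continuous form of the formula is

$$g(\theta \mid D) = \frac{p(D \mid \theta)\, g(\theta)}{\int p(D \mid \theta')\, g(\theta')\, d\theta'},$$

so the prior $p(\theta)$ above is read as a density, and $p(D)$ is its normalizing constant $\int p(D \mid \theta)\, g(\theta)\, d\theta$.
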
6 votes
1 answer
823 views

Can the posterior mean always be expressed as a weighted sum of the maximum likelihood estimate and the prior mean?

See this question. Is this always true? Can the posterior mean always be expressed as a weighted sum of the maximum likelihood estimate and the prior mean (after choosing some appropriate prior)?
asked by helperFunction
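
The canonical case where the answer is yes (offered as an illustration, not a proof of "always"): in the conjugate normal model $x_i \sim N(\theta, \sigma^2)$, $\theta \sim N(\mu_0, \tau^2)$, the posterior mean is the convex combination

$$E[\theta \mid x] = w\,\bar{x} + (1 - w)\,\mu_0, \qquad w = \frac{n/\sigma^2}{n/\sigma^2 + 1/\tau^2},$$

with $\bar{x}$ the MLE and $w \to 1$ as the prior flattens.
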
4 votes
2 answers
519 views

How does Prior Variance Affect the Discrepancy between MLE and Posterior Expectation?

Suppose that $\theta\in R$ is a parameter of interest, $p(\theta)$ is our prior belief regarding $\theta$, and $\hat \theta$ is the MLE for $\theta$ derived from the data $x$. It is my understanding that ...
asked by David • 1,276
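
A minimal numeric sketch of the effect in the conjugate normal model, with made-up numbers:

```python
import numpy as np

# x_i ~ N(theta, sigma^2) with prior theta ~ N(mu0, tau^2); all values
# below are hypothetical choices for illustration.
rng = np.random.default_rng(0)
sigma, mu0, theta_true, n = 1.0, 0.0, 2.0, 20
x = rng.normal(theta_true, sigma, size=n)
mle = x.mean()

for tau2 in [0.01, 0.1, 1.0, 10.0, 100.0]:
    precision = n / sigma**2 + 1 / tau2
    post_mean = (n * mle / sigma**2 + mu0 / tau2) / precision
    print(f"tau^2 = {tau2:6.2f}   posterior mean = {post_mean:.4f}   MLE = {mle:.4f}")

# As tau^2 grows (a diffuse prior) the posterior mean approaches the MLE;
# as tau^2 shrinks it is pulled toward the prior mean mu0.
```
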
3 votes
2 answers
223 views

Can an improper prior distribution be informative?

I have just worked through an example where, with an improper prior, the Bayesian estimator equals the maximum likelihood estimator, leading me to believe that improper priors are uninformative. But ...
asked by David • 1,276
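
One small example suggesting the answer is yes (a standard Beta–Bernoulli calculation, not taken from the question): with $s$ successes in $n$ trials, $0 < s < n$, the improper Haldane prior $p(\theta) \propto \theta^{-1}(1-\theta)^{-1}$ yields posterior mean $s/n$, the MLE, whereas the equally improper prior $p(\theta) \propto \theta^{-1}$ yields

$$p(\theta \mid s) \propto \theta^{s-1}(1-\theta)^{n-s} \;\Rightarrow\; E[\theta \mid s] = \frac{s}{n+1},$$

which is shrunk toward 0. Impropriety alone therefore does not make a prior uninformative.
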
2 votes
0 answers
748 views

Posterior distribution for the Bernoulli parameter

The pdf of $X \mid \theta$ is given by $\theta^x (1- \theta)^{1-x}$ and the prior distribution is given by $p(\theta) = \frac{1}{B(\alpha, \beta)} \theta^{\alpha - 1} (1 - \theta)^{\beta - 1}$, where $...
asked by Maheem Bhatia
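
Completing the standard calculation the question sets up: multiplying the Bernoulli likelihood by the Beta prior gives

$$p(\theta \mid x) \propto \theta^{x}(1-\theta)^{1-x}\,\theta^{\alpha-1}(1-\theta)^{\beta-1} = \theta^{x+\alpha-1}(1-\theta)^{\beta-x},$$

i.e. the posterior is $\mathrm{Beta}(\alpha + x,\ \beta + 1 - x)$.
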
3 votes
2 answers
786 views

Are "improper uniform priors" in Bayesian analysis equivalent to maximum likelihood estimations?

The improper uniform distribution for a parameter $\theta$ is $p(\theta) = 1$ for $-\infty < \theta < \infty$. It is called "improper" since it does not integrate to 1. Because Bayes' theorem is ...
asked by T X • 1,037
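
Spelling out the point at issue: with $p(\theta) = 1$,

$$p(\theta \mid D) = \frac{p(D \mid \theta)\cdot 1}{\int p(D \mid \theta')\, d\theta'} \propto p(D \mid \theta),$$

so maximizing the posterior is the same as maximizing the likelihood (MAP = MLE), provided the normalizing integral is finite so the posterior exists; other posterior summaries, such as the posterior mean, need not equal the MLE.
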
2 votes
1 answer
336 views

How does L2 regularization penalize large weights?

The L2 regularization term is useful because it penalizes large weights more than small ones, which helps prevent overfitting. I'm having a hard time understanding how exactly it does this. This ...
asked by buydadip • 123
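
A minimal numeric sketch with hypothetical weights and $\lambda$: the gradient of the penalty $\lambda \lVert w \rVert^2$ is $2\lambda w$, so the pull toward zero grows linearly with each weight's magnitude.

```python
import numpy as np

# Hypothetical weights and regularization strength.
w = np.array([0.1, 1.0, 10.0])
lam = 0.01

# Gradient of lam * ||w||^2 with respect to w: proportional to w itself,
# so a weight ten times larger feels ten times the pull toward zero.
penalty_grad = 2 * lam * w
print(penalty_grad)  # [0.002 0.02  0.2 ]
```
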
9 votes
1 answer
6k views

MAP estimation as regularisation of MLE

While going through the Wikipedia article on Maximum a posteriori estimation, I got confused after reading this: "It is closely related to the method of maximum likelihood (ML) estimation, but employs ...
asked by naive • 1,049
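
The relation in one line (standard, assuming a prior density $p(\theta)$):

$$\hat\theta_{\text{MAP}} = \arg\max_\theta \big[\log p(D \mid \theta) + \log p(\theta)\big],$$

so the log-prior acts as an additive penalty on the log-likelihood. A $N(0, \tau^2)$ prior contributes $-\theta^2 / (2\tau^2)$ up to a constant, which is exactly an L2 (ridge) penalty.
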
1 vote
2 answers
310 views

Tossing a coin and the classical ML estimate

I'm reading Bishop's Pattern Recognition and came across the following on p. 23: "Suppose, for instance, that a fair-looking coin is tossed three times and lands heads each time." A classical ...
asked by amplifier • 111
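
The calculation behind the passage: after 3 heads in 3 tosses, the ML estimate of the heads probability is $\hat\mu = 3/3 = 1$, which predicts heads forever. A Beta prior smooths this; for instance (an illustrative choice, not Bishop's), a $\mathrm{Beta}(2, 2)$ prior gives posterior mean

$$E[\mu \mid \text{data}] = \frac{3 + 2}{3 + 2 + 2} = \frac{5}{7} \approx 0.71.$$
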
4 votes
2 answers
1k views

Conjugate prior: is it ever the best choice?

I'm reading about the conjugate priors of classic probability distributions (e.g., the beta distribution for the binomial distribution); it's explained just as an "algebraic trick" to make calculation easier in ...
asked by volperossa
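
A minimal sketch of the "algebraic trick" at work, with made-up numbers: the closed-form conjugate update matches a brute-force grid posterior, but costs nothing to compute.

```python
import numpy as np
from scipy import stats

# Hypothetical data: s successes in n Bernoulli trials, Beta(2, 2) prior.
s, n = 7, 10
a0, b0 = 2.0, 2.0

# Conjugate closed-form update: Beta(a0 + s, b0 + n - s).
a_post, b_post = a0 + s, b0 + (n - s)

# Grid approximation of the same posterior, using no conjugacy at all.
theta = np.linspace(1e-4, 1 - 1e-4, 10_000)
unnorm = stats.binom.pmf(s, n, theta) * stats.beta.pdf(theta, a0, b0)
grid_mean = np.sum(theta * unnorm) / np.sum(unnorm)

print(a_post / (a_post + b_post))  # closed-form posterior mean, ~0.6429
print(grid_mean)                   # grid posterior mean, ~same value
```
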
12 votes
2 answers
4k views

When does the maximum likelihood correspond to a reference prior?

I have been reading James V. Stone's very nice books "Bayes' Rule" and "Information Theory". I want to know which sections of the books I did not understand and thus need to re-...
asked by Chill2Macht • 6,369
