
All Questions

1 vote · 1 answer · 76 views

Bayes classifiers with cost of misclassification

A minimum ECM classifier assigns the features $\underline{x}$ to class $t$ ($\delta(\underline{x}) = t$) if $\forall j \ne t$: $$\sum_{k\ne t} c(t|k) f_k(\underline{x})p_k \le \sum_{k\ne ...
BiasedBayes
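The truncated rule above is concrete enough to sketch. A minimal, hypothetical Python illustration of a minimum-ECM classifier (function and argument names are mine, not from the question): assign $\underline{x}$ to the class $t$ minimizing $\sum_{k\ne t} c(t|k) f_k(\underline{x})p_k$.

```python
import numpy as np

# Hypothetical sketch of the minimum-ECM rule from the excerpt:
# densities[k] = f_k(x), priors[k] = p_k, cost[t][k] = c(t|k).
def ecm_classify(densities, priors, cost):
    K = len(priors)
    ecm = [sum(cost[t][k] * densities[k] * priors[k]
               for k in range(K) if k != t)
           for t in range(K)]                 # expected cost of deciding t
    return int(np.argmin(ecm))

# With equal priors and symmetric unit costs this reduces to the
# ordinary Bayes rule: pick the class with the larger f_k(x) p_k.
print(ecm_classify([0.2, 0.8], [0.5, 0.5], [[0, 1], [1, 0]]))  # prints 1
```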
1 vote · 0 answers · 67 views

Gibbs Priors form a Martingale

I am working on adapting variational inference to the recently developed Martingale posterior distributions. The first case, which reduces the VI framework to Gibbs priors, is proving hard to show as ...
BayesRayes
2 votes · 1 answer · 209 views

Sum of arrival times of Chinese Restaurant Process (CRP)

Suppose that a random sample $X_1, X_2, \ldots$ is drawn from a continuous spectrum of colors, or species, following a Chinese Restaurant Process distribution with parameter $|\alpha|$ (or ...
Grandes Jorasses
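As context for the excerpt, the CRP's seating dynamics are easy to simulate. An illustrative sketch under the standard CRP definition with parameter $\alpha$ (code and names are mine, not from the question):

```python
import random

# Customer i+1 opens a new table with probability alpha / (alpha + i),
# otherwise joins an existing table with probability proportional to
# its occupancy; the number of tables grows like alpha * log(n).
def crp_table_counts(n, alpha, seed=0):
    rng = random.Random(seed)
    counts = []                                # occupancy of each table
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            counts.append(1)                   # open a new table
        else:
            r = rng.random() * i               # pick one of the i seated customers
            acc = 0.0
            for t in range(len(counts)):
                acc += counts[t]
                if r < acc:
                    counts[t] += 1
                    break
    return counts

counts = crp_table_counts(1000, alpha=1.0)
print(sum(counts), len(counts))  # 1000 customers at roughly log(1000) tables
```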
0 votes · 1 answer · 89 views

Existence and uniqueness of a posterior distribution

I am wondering about the existence and uniqueness of a posterior distribution. While Bayes' theorem gives the form of the posterior, perhaps there are pathological cases (over some weird probability ...
CoilyUlver
0 votes · 0 answers · 72 views

Probability distribution for a Bayesian Update

I am struggling with a process like this: $$X_t=\begin{cases} \frac{\alpha\omega_t}{\alpha\omega_t+\beta(1-\omega_t)} & \text{with prob } p\\ \frac{(1-\alpha)\omega_t}{(1-\alpha)\omega_t+(1-\beta)(...
DreDev · 21
0 votes · 1 answer · 102 views

How does this Bayesian updating work: $z_i=f+a_i+\epsilon_i$?

$z_i=f+a_i+\epsilon_i$, where $f\sim N(\bar{f},\sigma_{f}^2)$; $a_i\sim N(\bar{a_{i}},\sigma_{a}^2)$; $\epsilon_i\sim N(0,\sigma_{\epsilon}^2)$. We observe the signals $\{z_i\}$ where $i\subseteq {1,...
yunfan Yang
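Assuming $f$, $a_i$, and $\epsilon_i$ are mutually independent (the excerpt suggests but does not state this), each de-meaned signal $z_i - \bar{a}_i$ is $N(f, \sigma_a^2 + \sigma_\epsilon^2)$ given $f$, so the update is the standard normal–normal one. A hedged sketch (names are mine):

```python
import numpy as np

# Conjugate normal-normal update for f: given f, z_i - abar_i is
# N(f, sa2 + se2), so precisions add and means are precision-weighted.
def update_f(z, abar, fbar, sf2, sa2, se2):
    y = np.asarray(z, dtype=float) - np.asarray(abar, dtype=float)
    noise = sa2 + se2                      # Var[(a_i - abar_i) + eps_i]
    prec = 1.0 / sf2 + y.size / noise      # posterior precision of f
    mean = (fbar / sf2 + y.sum() / noise) / prec
    return mean, 1.0 / prec

mean, var = update_f([2.0], [0.0], fbar=0.0, sf2=1.0, sa2=0.5, se2=0.5)
print(mean, var)  # 1.0 0.5: one unit-noise signal halves the variance
```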
2 votes · 0 answers · 60 views

Concentration of posterior probability around a tiny fraction of the prior volume

In the context of approximating the evidence $Z$ in a Bayesian inference setting $$ Z = \int d\theta \mathcal L (\theta)\pi (\theta) $$ with $\mathcal L$ the likelihood, $\pi$ the prior, John Skilling'...
long_john
1 vote · 1 answer · 136 views

Conditional Gaussians in infinite dimensions

I asked this over on Cross Validated, but thought it might also get an answer here: the law of the conditional Gaussian distribution (the mean and covariance) is frequently said to extend to the ...
user2379888
4 votes · 2 answers · 216 views

Do these distributions have a name already?

In playing with some math finance stuff I ran into the following distribution and I was curious if someone had a name for it or has studied it or worked with it already. To start, let $\Delta^n$ be ...
Jess Boling
3 votes · 1 answer · 270 views

A quantity associated to a probability measure space

Let $(S,P)$ be a (finite) probability space. We associate to $(S,P)$ a quantity $n(S,P)$ as follows: The probability of two randomly chosen events $A,B\subset S$ being independent is denoted by $n(S,P)...
Ali Taghavi
0 votes · 1 answer · 171 views

CLT for random variables with positive support (e.g. exponential)

I have a bunch of iid $\{X_i\}$ with $X_i \sim \exp(\lambda)$, say $\lambda = 1$. Now the classic version of the CLT tells me: \begin{equation} \sqrt{n}\left(1-\bar{X}_n\right) \rightarrow \mathcal{N}\...
qwert · 89
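The excerpt's limit is easy to check by simulation, using that $\mathrm{E}[X_i] = \mathrm{Var}(X_i) = 1$ for $\mathrm{Exp}(1)$. A small Monte Carlo sketch (sample sizes are mine):

```python
import numpy as np

# sqrt(n) * (1 - mean(X)) should be approximately N(0, 1) even though
# each X_i is supported on the positive half-line.
rng = np.random.default_rng(0)
n, reps = 1000, 4000
x = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (1.0 - x.mean(axis=1))
print(round(z.mean(), 2), round(z.std(), 2))  # near 0 and 1
```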
0 votes · 1 answer · 162 views

Lower bound for reduced variance after conditioning

Let $X$ be a random variable with variance $\tau^2$ and $Y$ be another random variable such that $Y-X$ is independent of $X$ and has mean zero and variance $\sigma^2$. (One can think of $Y$ as a noisy ...
Nima · 3
1 vote · 1 answer · 324 views

Posterior expected value for squared Fourier coefficients of random Boolean function

Let $f : \{0, 1\}^{n} \rightarrow \{-1, 1\}$ be a Boolean function. Let the Fourier coefficients of this function be given by $$ \hat f(z) = \frac{1}{2^{n}} \sum_{x \in \{0, 1\}^{n}} f(x)(-1)^{x \cdot ...
RandomMatrices
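Using the excerpt's definition of $\hat f(z)$, Parseval gives $\sum_z \hat f(z)^2 = 1$ for any $f:\{0,1\}^n \to \{-1,1\}$, so a uniformly random $f$ has $\mathrm{E}[\hat f(z)^2] = 2^{-n}$ for each fixed $z$. A brute-force sketch (code mine, small $n$ only):

```python
import itertools

# hat f(z) = 2^{-n} * sum_x f(x) * (-1)^{x . z}, computed directly;
# f_vals lists f(x) over x in lexicographic order.
def fourier_coeffs(f_vals, n):
    xs = list(itertools.product([0, 1], repeat=n))
    return [sum(f_vals[i] * (-1) ** sum(a * b for a, b in zip(xs[i], z))
                for i in range(2 ** n)) / 2 ** n
            for z in xs]

coeffs = fourier_coeffs([1, 1, 1, -1], n=2)
print(round(sum(c * c for c in coeffs), 6))  # 1.0, as Parseval requires
```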
1 vote · 1 answer · 2k views

Convolution of two Gaussian mixture models

Suppose I have two independent random variables $X$, $Y$, each modeled by a Gaussian mixture model (GMM). That is, $$ f(x)=\sum _{k=1}^K \pi _k \mathcal{N}\left(x|\mu _k,\sigma _k\right) $$ $$ g(y)=\...
wuhanichina
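This one has a standard closed form: the sum of independent GMMs is again a GMM over all component pairs. A sketch (notation for $g$'s parameters is mine, and the excerpt's $\sigma_k$ is treated as a variance here, which is an assumption):

```python
import numpy as np

# X + Y has weight pi_k * rho_l, mean mu_k + nu_l, and variance
# vx_k + vy_l for every component pair (variances add under independence).
def gmm_convolve(pi, mu, vx, rho, nu, vy):
    w = np.outer(pi, rho).ravel()
    m = np.add.outer(np.asarray(mu), np.asarray(nu)).ravel()
    v = np.add.outer(np.asarray(vx), np.asarray(vy)).ravel()
    return w, m, v

w, m, v = gmm_convolve([0.3, 0.7], [0.0, 2.0], [1.0, 0.25],
                       [0.6, 0.4], [-1.0, 1.0], [2.0, 1.0])
# K*L components; weights sum to 1; mixture mean equals E[X] + E[Y]
print(len(w), round(w.sum(), 6), round(float((w * m).sum()), 6))
```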
3 votes · 0 answers · 164 views

Minimizing an f-divergence and Jeffrey's Rule

My question is about f-divergences and Richard Jeffrey's (1965) rule for updating probabilities in the light of partial information. The set-up: Let $p: \mathcal{F} \rightarrow [0,1]$ be a ...
jw7642 · 91
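For context on the excerpt's setup: on a finite space, Jeffrey's rule replaces the probabilities of a partition $\{E_i\}$ by new values $q_i$ while keeping the conditional probabilities within each cell fixed, $p_{\text{new}}(A) = \sum_i q_i\, p(A \mid E_i)$. A minimal sketch (example values are mine):

```python
# Jeffrey conditionalization on a finite outcome space: rescale the
# mass inside each partition cell E_i to the new total q_i.
def jeffrey_update(p, partition, q):
    new = {}
    for cell, qi in zip(partition, q):
        p_cell = sum(p[w] for w in cell)       # old mass of E_i
        for w in cell:
            new[w] = qi * p[w] / p_cell        # keep p(w | E_i) fixed
    return new

p = {"a": 0.25, "b": 0.25, "c": 0.5}
new = jeffrey_update(p, [{"a", "b"}, {"c"}], [0.8, 0.2])
print(new["a"], new["c"])  # 0.4 0.2
```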
