8 votes

How to Define Higher-Order Terms Analogous to Expectation and Variance in Probability Theory?

The higher-order generalizations of the expectation and variance are called the cumulants, $\kappa_n(X)$. They can be defined using the logarithm of the moment generating function: $$K_X(t) = \log M_X(t)\,.$$ ...
Qiaochu Yuan
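A quick way to see this in practice (a sketch of my own, not part of the answer): differentiate the log of a known MGF with sympy. For a Poisson($\lambda$) variable every cumulant comes out as $\lambda$.

```python
import sympy as sp

t, lam = sp.symbols("t lambda", positive=True)
M = sp.exp(lam * (sp.exp(t) - 1))   # MGF of a Poisson(lambda) variable
K = sp.log(M)                       # cumulant generating function K_X(t)

# kappa_n(X) is the n-th derivative of K_X at t = 0;
# for the Poisson, every cumulant equals lambda.
for n in range(1, 5):
    print(n, sp.simplify(sp.diff(K, t, n).subs(t, 0)))
```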
2 votes

How to find (power) distribution parameters

The maximum likelihood method is really exactly what it says: you take the likelihood and maximize it. For numerical reasons, it is usually better to maximize the log-likelihood instead (which finds ...
Martin Modrák
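As a hedged illustration of that recipe (my own sketch; the Pareto model and sample are arbitrary choices): minimize the negative log-likelihood numerically, then compare with the closed-form MLE.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(0)
data = rng.pareto(3.0, size=1000) + 1.0   # synthetic Pareto(alpha=3), x_m = 1

def neg_log_lik(alpha):
    # log f(x) = log(alpha) - (alpha + 1) * log(x) for x >= 1
    return -(data.size * np.log(alpha) - (alpha + 1) * np.log(data).sum())

res = optimize.minimize_scalar(neg_log_lik, bounds=(0.1, 10.0), method="bounded")
print("numerical MLE:", res.x)
print("closed form  :", data.size / np.log(data).sum())
```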
2 votes
Accepted

Are $\mu_{\hat{p}}$ and $\sigma_{\hat{p}}$ considered parameters or statistics?

There are a variety of definitions of both, but for me a "parameter" is a value that underpins the behaviour of some random variable, while a "statistic" is a value calculated from ...
ConMan
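To make the distinction concrete (a minimal sketch of my own, assuming a Bernoulli model with known $p$): $\mu_{\hat{p}}$ and $\sigma_{\hat{p}}$ are fixed by the model, while $\hat{p}$ is calculated from data.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 50                         # p underpins the model: a parameter
mu_phat = p                            # mean of the sampling distribution of p-hat
sigma_phat = np.sqrt(p * (1 - p) / n)  # its standard deviation, also a parameter

sample = rng.binomial(1, p, size=n)    # data
p_hat = sample.mean()                  # a statistic: calculated from the sample
print(mu_phat, sigma_phat, p_hat)
```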
1 vote

Approximating a discrete distribution with CLT

If $Y_i\sim \text{Poisson}(\lambda)$ are iid for $i=1,2,\dots,n$ and $\bar{Y}$ denotes their average, then their sum $$ n\bar{Y}\sim \text{Poisson}(n\lambda) $$ exactly, without approximation. Now if ...
Zack Fisher
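A numerical check of the two views (my own sketch): compare the exact Poisson($n\lambda$) CDF of the sum with its CLT normal approximation, using a continuity correction.

```python
import numpy as np
from scipy import stats

lam, n = 2.0, 30
exact = stats.poisson(n * lam)                            # sum is exactly Poisson(n*lam)
approx = stats.norm(loc=n * lam, scale=np.sqrt(n * lam))  # CLT approximation

for k in (50, 60, 70):
    print(k, exact.cdf(k), approx.cdf(k + 0.5))           # continuity correction
```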
1 vote
Accepted

Doubts on "An Intensive Introduction to Cryptography" exercise about Shannon's entropy

I suspect that the issue here lies in the meaning of "For every one-to-one function $F:S \rightarrow \{0,1\}^*$". Typically one wants to deal with streams of symbols from $S$, think something like ...
Matt Werenski
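For context, a tiny sketch of my own (the stream is hypothetical): the per-symbol Shannon entropy of a stream over $S$, the quantity such exercises compare against the length of a one-to-one encoding.

```python
import math
from collections import Counter

stream = "aabbbbcc"   # hypothetical stream of symbols from S
counts = Counter(stream)
n = len(stream)
# Shannon entropy H = -sum p*log2(p) over the symbol frequencies
H = -sum(c / n * math.log2(c / n) for c in counts.values())
print("entropy per symbol:", H)
```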
1 vote

Measuring departure between the posterior predictive distribution and the true data generating distribution

Some random remarks: statistics questions, unless they deal with heavy probability theory, are probably much better directed at stats.stackexchange.com. Your question: "Suppose I am trying to measure ...
Guillaume Dehaene
1 vote
Accepted

An inequality for a bisected "shifted quadrant" under a continuous symmetric bivariate distribution?

It is sort of sufficient. Edited details: Let $f(x,y)$ be the density and $C_n = \{(x,y): x^2+y^2 \le n\}$. Define $$I_n := \int_{A\cap C_n} f(x,y)\,\mathrm{d}x\,\mathrm{d}y\,,\quad J_n := \int_{B\cap ...
Sounak
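Since the excerpt is cut off, here is only a generic Monte Carlo sanity check of my own; the concrete density, shift, and sets $A$ and $B$ below are hypothetical stand-ins, not the ones from the question.

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.standard_normal((1_000_000, 2))  # symmetric bivariate density (illustrative)
a = 0.5                                   # hypothetical shift
quad = (xy[:, 0] >= a) & (xy[:, 1] >= a)  # shifted quadrant
A = quad & (xy[:, 1] >= xy[:, 0])         # half above the diagonal (stand-in for A)
B = quad & ~A                             # the other half (stand-in for B)
print(A.mean(), B.mean())                 # equal by symmetry, up to noise
```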
1 vote

Are $\mu_{\hat{p}}$ and $\sigma_{\hat{p}}$ considered parameters or statistics?

Moments of a (parametric) distribution are parameters. That's why they are called parametric distributions. That the distribution is a sampling distribution is immaterial. For instance, if we have ...
heropup
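A one-line version of the point (my own sketch): for an Exponential($\lambda$) family, the mean and variance are functions of $\lambda$ alone, hence parameters.

```python
from scipy import stats

lam = 2.0
X = stats.expon(scale=1 / lam)   # Exponential(lam)
print(X.mean(), X.var())         # 1/lam and 1/lam**2: functions of the parameter
```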
1 vote

Discretized Distributions on Rationals?

Expanded from comments: I do not see how you plan to sum (as opposed to integrate) a density (as opposed to a probability mass function) and get 1. You could create a discrete distribution on the ...
Henry
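One concrete way to carry this out (a sketch of my own, assuming a finite set of rationals and Gaussian-shaped weights; the bounds are arbitrary): evaluate a weight at each rational $p/q$ with bounded denominator, then normalize so the masses sum to 1.

```python
from fractions import Fraction
import math

Q, B = 20, 5   # max denominator and support bound (both hypothetical)
support = {Fraction(p, q) for q in range(1, Q + 1)
           for p in range(-B * q, B * q + 1)}
weights = {r: math.exp(-float(r) ** 2 / 2) for r in support}  # Gaussian-shaped
Z = sum(weights.values())
pmf = {r: w / Z for r, w in weights.items()}                  # a genuine pmf
print(sum(pmf.values()))                                      # 1.0 up to rounding
```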
1 vote

Discretized Distributions on Rationals?

I would say that "nice", "benchmark" continuous distributions tend to have regular supports: the closure of the set on which the density (w.r.t. the Lebesgue measure) is positive. E.g. ...
SBF
1 vote
Accepted

Definition of mixture of two distributions

When we specify a mixture distribution, we actually specify a conditional distribution $X \vert B$, where $B \sim \text{Ber}(p)$. That is, we specify $P(X \leq x \vert B = 0)$, and $P(X \leq x \vert B ...
MrTheOwl
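This two-stage description translates directly into a sampler (a sketch of my own; the two normal components are arbitrary choices): draw $B \sim \text{Ber}(p)$, then draw $X$ from the component that $B$ selects.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.3
B = rng.binomial(1, p, size=100_000)   # B ~ Ber(p) picks the component
X = np.where(B == 1,
             rng.normal(0.0, 1.0, B.size),   # X | B = 1
             rng.normal(4.0, 2.0, B.size))   # X | B = 0
# Marginally, P(X <= x) = p*P(X <= x | B=1) + (1-p)*P(X <= x | B=0)
print(X.mean())   # compare with p*0 + (1-p)*4 = 2.8
```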
1 vote
Accepted

Is there a distribution of values such that removing values decreases the mean?

If the observations of a finite sample $X_1, X_2, \dots, X_n$ share a common finite expectation $\mu=\mathbb{E}(X_1)$, then the sample size $n$ does not affect the expectation of the sample average, irrespective of whether the $X_i$ ...
Zack Fisher
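A quick simulation of this fact (my own sketch, with an exponential population where $\mu = 2$): the expectation of the sample mean does not move as $n$ changes.

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (5, 50, 500):
    means = rng.exponential(2.0, size=(100_000, n)).mean(axis=1)
    print(n, means.mean())   # all close to mu = 2.0, whatever n is
```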
1 vote
Accepted

Estimation of a gamma function-like integral

This is probably not the simplest answer. We need to show that $$ \frac{1}{k!}\int_{2k + 2}^{+\infty} x^k \mathrm{e}^{-x}\,\mathrm{d}x < \frac{1}{k + 1} $$ for $k>-1$. (I simply define $k!$ ...
Gary
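The left-hand side is the regularized upper incomplete gamma function, so the inequality is easy to spot-check numerically (my own sketch using scipy):

```python
from scipy import special

# (1/Gamma(k+1)) * integral_{2k+2}^inf x^k e^{-x} dx = gammaincc(k+1, 2k+2)
for k in (-0.5, 0, 1, 5, 20):
    lhs = special.gammaincc(k + 1, 2 * k + 2)
    print(k, lhs, 1 / (k + 1), lhs < 1 / (k + 1))
```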
1 vote

Confusion in using applying variance formula

The variance of a sum of independent random variables $X$ and $Y$ is the sum of their variances; i.e., $$\operatorname{Var}[X+Y] \overset{\text{ind}}{=} \operatorname{Var}[X] + \operatorname{Var}[Y],$$...
heropup
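A simulation check of the identity (my own sketch, with arbitrary independent choices for $X$ and $Y$):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(0.0, 2.0, 1_000_000)   # Var[X] = 4
Y = rng.exponential(3.0, 1_000_000)   # Var[Y] = 9, independent of X
print(np.var(X + Y), np.var(X) + np.var(Y))   # agree up to simulation noise
```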
1 vote

The distribution of $XY+(1-X)(1-Y)$ for $X,Y$ sampled uniformly from [0,1]

As suggested in the comments, perform the transformation $$X = U + 1/2, \quad Y = V + 1/2$$ to obtain $$Z = XY + (1-X)(1-Y) = 2UV + 1/2, \\ U, V \sim \operatorname{Uniform}(-1/2,1/2). \tag{1}$$ This ...
heropup
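The substitution in $(1)$ is easy to verify by simulation (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
X, Y = rng.uniform(0.0, 1.0, (2, 1_000_000))
Z = X * Y + (1 - X) * (1 - Y)
U, V = X - 0.5, Y - 0.5                  # U, V ~ Uniform(-1/2, 1/2)
print(np.allclose(Z, 2 * U * V + 0.5))   # True: Z = 2UV + 1/2
```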
