8
votes
How to Define Higher-Order Terms Analogous to Expectation and Variance in Probability Theory?
The higher-order generalizations of the expectation and variance are called the cumulants, $\kappa_n(X)$. They can be defined using the logarithm of the moment generating function:
$$K_X(t) = \log M_X(...
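The relationship between cumulants and moments can be checked numerically. The sketch below (an illustrative assumption, not part of the original answer: a Poisson sample, for which every cumulant equals $\lambda$) estimates the first three cumulants from sample moments, using $\kappa_1 = \mathbb{E}[X]$, $\kappa_2 = \operatorname{Var}(X)$, $\kappa_3 = \mathbb{E}[(X-\mathbb{E}[X])^3]$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0  # illustrative choice: Poisson(3), whose cumulants are all 3
x = rng.poisson(lam, size=200_000).astype(float)

# First three cumulants expressed through (central) moments:
#   kappa_1 = E[X], kappa_2 = Var(X), kappa_3 = E[(X - E[X])^3]
m = x.mean()
c = x - m
kappa = (m, (c**2).mean(), (c**3).mean())
print(kappa)  # all three should be close to lam
```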
2
votes
How to find (power) distribution parameters
The maximum likelihood method is exactly what it says: you take the likelihood and maximize it. For numerical reasons, it is usually better to maximize the log-likelihood instead (which finds ...
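The "maximize the log-likelihood" recipe can be sketched in a few lines. This is a hypothetical example (the specific density $f(x;a) = a x^{a-1}$ on $(0,1]$ and all numbers are my own illustrative choices, not from the answer) that maximizes the log-likelihood numerically and compares against the closed-form MLE $\hat a = -n / \sum \log x_i$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
a_true = 2.5
# Power density f(x; a) = a x^(a-1) on (0, 1]; sample via inverse CDF U^(1/a)
x = rng.random(50_000) ** (1.0 / a_true)

def neg_log_lik(a):
    # log L(a) = n log a + (a - 1) sum(log x); minimize its negative
    return -(len(x) * np.log(a) + (a - 1.0) * np.log(x).sum())

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
a_closed = -len(x) / np.log(x).sum()  # analytic MLE for comparison
print(res.x, a_closed)
```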
2
votes
Accepted
Are $\mu_{\hat{p}}$ and $\sigma_{\hat{p}}$ considered parameters or statistics?
There are a variety of definitions of both, but for me a "parameter" is a value that underpins the behaviour of some random variable, while a "statistic" is a value calculated from ...
1
vote
Approximating a discrete distribution with CLT
If iid $Y_i\sim \text{Poisson}(\lambda)$ for $i=1,2,\dots,n$ and denote their average as $\bar{Y}$, then their sum
$$ n\bar{Y}\sim \text{Poisson}(n\lambda)
$$
exactly without approximation. Now if ...
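The exactness of the Poisson sum, and the quality of the CLT approximation, are easy to check by simulation. A minimal sketch (the values of $\lambda$ and $n$ are illustrative assumptions) comparing the empirical pmf of the sum against the exact Poisson($n\lambda$) pmf and a continuity-corrected normal approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam, n = 0.7, 30

# Sum of n iid Poisson(lam) draws; the sum is exactly Poisson(n * lam)
sums = rng.poisson(lam, size=(100_000, n)).sum(axis=1)

k = 21  # n * lam = 21, the mean of the sum
emp = (sums == k).mean()
exact = stats.poisson.pmf(k, n * lam)
# CLT approximation with continuity correction: Normal(n*lam, n*lam)
clt = stats.norm.cdf(k + 0.5, n * lam, np.sqrt(n * lam)) - \
      stats.norm.cdf(k - 0.5, n * lam, np.sqrt(n * lam))
print(emp, exact, clt)
```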
1
vote
Accepted
Doubts on "An Intensive Introduction to Cryptography" exercise about Shannon's entropy
I suspect that the issue here lies at the meaning of
For every one to one function $F:S \rightarrow \{0,1\}^*$
Typically one wants to deal with streams of symbols from $S$, think something like &...
1
vote
Measuring departure between the posterior predictive distribution and the true data generating distribution
Some random remarks:
Statistics questions, unless dealing with heavy probability theory, are probably much better directed at stats.stackexchange.com.
Your question
Suppose, I am trying to measure ...
1
vote
Accepted
An inequality for a bisected "shifted quadrant" under a continuous symmetric bivariate distribution?
It is sort of sufficient.
Edited details:
Let $f(x,y)$ be the density and $C_n = \{(x,y): x^2+y^2 \le n\}$. Define
$$I_n := \int_{A\cap C_n} f(x,y)\,\mathrm{d}x\,\mathrm{d}y~,\quad J_n := \int_{B\cap ...
1
vote
Are $\mu_{\hat{p}}$ and $\sigma_{\hat{p}}$ considered parameters or statistics?
Moments of a (parametric) distribution are parameters. That's why they are called parametric distributions. That the distribution is a sampling distribution is immaterial.
For instance, if we have ...
1
vote
Discretized Distributions on Rationals?
Expanded from comments:
I do not see how you plan to sum (as opposed to integrate) a density (as opposed to a probability mass function) and get 1. You could create a discrete distribution on the ...
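One way to construct such a discrete distribution on (a countable family of) rationals is to assign each $p/q$ in lowest terms a summable weight and normalize. The weight $2^{-(p+q)}$ and the enumeration cap below are my own illustrative assumptions, not from the answer:

```python
from fractions import Fraction
from math import gcd

N = 60  # enumeration cap; the tail beyond it is negligible for this weight
support, weights = [], []
for q in range(1, N + 1):
    for p in range(1, N + 1):
        if gcd(p, q) == 1:          # keep each rational once, in lowest terms
            support.append(Fraction(p, q))
            weights.append(2.0 ** -(p + q))

total = sum(weights)
pmf = {x: w / total for x, w in zip(support, weights)}
print(sum(pmf.values()))  # a genuine pmf: the masses sum to 1
```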
1
vote
Discretized Distributions on Rationals?
I would say that "nice", "benchmark" continuous distributions tend to have regular supports - the closure of the set on which the density (w.r.t. the Lebesgue measure) is positive. E.g. ...
1
vote
Accepted
Definition of mixture of two distributions
When we specify a mixture distribution, we actually specify a conditional distribution $X \vert B$, where $B \sim \text{Ber}(p)$. That is, we specify $P(X \leq x \vert B = 0)$, and $P(X \leq x \vert B ...
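The hierarchical specification above translates directly into a sampling scheme: draw $B \sim \text{Ber}(p)$ first, then draw $X$ from the component selected by $B$. The two normal components and the value of $p$ below are illustrative assumptions; the check is that the mixture CDF is the weighted average of the component CDFs:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p, n = 0.3, 200_000  # mixing weight for component 1 (illustrative)

# Hierarchical sampling: B ~ Ber(p), then X | B from the chosen component
b = rng.random(n) < p
x = np.where(b, rng.normal(4.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Mixture CDF: P(X <= t) = (1 - p) * Phi(t) + p * Phi(t - 4)
t = 2.0
emp = (x <= t).mean()
theory = (1 - p) * norm.cdf(t) + p * norm.cdf(t - 4.0)
print(emp, theory)
```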
1
vote
Accepted
Is there a distribution of values such that removing values decreases the mean?
If the members of a finite sample $X_1, X_2, \dots, X_n$ share a common finite expectation $\mu=\mathbb{E}(X_1)$, then the sample size $n$ does not affect the expectation of the sample average,
irrespective of whether the $X_i$ ...
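A quick simulation makes the point concrete: the expectation of the sample average stays at $\mu$ for every sample size; only its spread shrinks. The exponential distribution and the specific sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = 5.0  # common expectation of every X_i (illustrative)

# E[mean(X_1..X_n)] = mu for every n; only Var(mean) = Var(X)/n changes
for n in (5, 50, 500):
    means = rng.exponential(mu, size=(20_000, n)).mean(axis=1)
    print(n, means.mean())
```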
1
vote
Accepted
Estimation of a gamma function-like integral
This is probably not the simplest answer. We need to show that
$$
\frac{1}{{k!}}\int_{2k + 2}^{ + \infty } {x^k {\rm e}^{ - x} {\rm d}x} < \frac{1}{{k + 1}}
$$
for $k>-1$. (I simply define $k!$ ...
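The inequality can be sanity-checked numerically. With $k! = \Gamma(k+1)$, the left-hand side is exactly the regularized upper incomplete gamma function $Q(k+1,\, 2k+2)$, which SciPy exposes as `gammaincc`. A sketch over a grid of $k > -1$:

```python
import numpy as np
from scipy.special import gammaincc

# Claim: (1/k!) * integral_{2k+2}^inf x^k e^{-x} dx < 1/(k+1), for k > -1.
# The left side equals the regularized upper incomplete gamma Q(k+1, 2k+2).
ks = np.linspace(-0.9, 20.0, 200)
lhs = gammaincc(ks + 1.0, 2.0 * ks + 2.0)
rhs = 1.0 / (ks + 1.0)
holds = bool(np.all(lhs < rhs))
print(holds)
```

This is only a spot check on a finite grid, of course, not a substitute for the proof.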
1
vote
Confusion in applying variance formula
The variance of a sum of independent random variables $X$ and $Y$ is the sum of their variances; i.e.,
$$\operatorname{Var}[X+Y] \overset{\text{ind}}{=} \operatorname{Var}[X] + \operatorname{Var}[Y],$$...
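The additivity of variance under independence is easy to verify by simulation. A minimal sketch (the two distributions are illustrative assumptions; their independent draws have $\operatorname{Var}[X]=4$ and $\operatorname{Var}[Y]=9$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500_000
x = rng.normal(0.0, 2.0, n)   # Var[X] = 4
y = rng.exponential(3.0, n)   # Var[Y] = 9

# For independent X and Y: Var[X + Y] = Var[X] + Var[Y]
print(np.var(x + y), np.var(x) + np.var(y))
```

For dependent variables the cross term $2\operatorname{Cov}[X,Y]$ would have to be added back in.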
1
vote
The distribution of $XY+(1-X)(1-Y)$ for $X,Y$ sampled uniformly from [0,1]
As suggested in the comments, perform the transformation $$X = U + 1/2, \quad Y = V + 1/2$$ to obtain
$$Z = XY + (1-X)(1-Y) = 2UV + 1/2, \\
U, V \sim \operatorname{Uniform}(-1/2,1/2). \tag{1}$$
This ...
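The substitution in $(1)$ can be verified by Monte Carlo: the identity $Z = 2UV + 1/2$ holds pointwise, and since $2UV$ is symmetric about $0$, $Z$ is symmetric about $1/2$ with $\mathbb{E}[Z] = 1/2$. A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
x, y = rng.random(n), rng.random(n)
z = x * y + (1 - x) * (1 - y)

# With U = X - 1/2, V = Y - 1/2: Z = 2UV + 1/2 (algebraic identity)
u, v = x - 0.5, y - 0.5
gap = np.max(np.abs(z - (2 * u * v + 0.5)))
print(gap)       # identity holds up to floating-point rounding
print(z.mean())  # symmetric about 1/2, so E[Z] = 1/2
```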
Only top scored, non community-wiki answers of a minimum length are eligible
Related Tags
probability-distributions × 28361
probability × 16774
probability-theory × 7832
statistics × 5232
random-variables × 2813
normal-distribution × 2216
expected-value × 1337
integration × 978
conditional-probability × 964
density-function × 961
uniform-distribution × 936
stochastic-processes × 740
measure-theory × 696
poisson-distribution × 631
exponential-distribution × 592
statistical-inference × 571
binomial-distribution × 529
calculus × 470
combinatorics × 444
independence × 410
real-analysis × 397
solution-verification × 373
moment-generating-functions × 364
conditional-expectation × 354
probability-limit-theorems × 333