
All Questions

2 votes
0 answers
46 views

Non-existence of an efficient estimator

I need to prove that given $(X_1,...,X_n)$ from the density $$\frac{1}{\theta}x^{\frac{1}{\theta}-1}1_{(0,1)}(x)$$ no efficient estimator exists for $g(\theta)=\frac{1}{\theta+1}$. I have shown that ...
Onofrio Olivieri
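A sketch of one standard route for this kind of problem (assuming "efficient" means attaining the Cramér-Rao bound): compute the score and check for which functions of $\theta$ it factorizes. Here
$$\frac{\partial}{\partial\theta}\log L(\theta) = \frac{n}{\theta^2}\Big(\frac{1}{n}\sum_{i=1}^{n}(-\log X_i) - \theta\Big),$$
and the bound is attained for $g(\theta)$ only if the score can be written as $A(\theta)\,\big(T - g(\theta)\big)$ for some statistic $T$ not depending on $\theta$. The factorization above works only for affine functions of $\theta$ itself (with $T=-\frac1n\sum\log X_i$), so no efficient estimator of $\frac{1}{\theta+1}$ exists.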
4 votes
2 answers
124 views

Must the maximum likelihood method be applied to a simple random sample or to a realisation?

I guess my trouble is not a big one but here it is: when one applies maximum likelihood, he considers the realization $(x_1, \dots, x_n)$ of a simple random sample (SRS), leading to ML Estimates. But ...
MysteryGuy
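For what it's worth, the usual way to reconcile the two views is that the same maximizer is applied either to the observed values or to the random variables:
$$\hat\theta(x_1,\dots,x_n)=\arg\max_\theta L(\theta;x_1,\dots,x_n)\ \text{(the estimate, a number)},\qquad \hat\theta(X_1,\dots,X_n)\ \text{(the estimator, a random variable)}.$$
The likelihood is written down for the realization, while sampling properties (bias, variance, asymptotics) refer to the same function applied to the SRS.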
1 vote
1 answer
34 views

Why can we get better asymptotic global estimators even for IID random variables?

Let $X_1,...,X_N$ be IID random variables sampled from a parametrised distribution $p_\theta$, and suppose my goal is to retrieve $\theta$ from these samples. We know that the MLE provides an ...
glS
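As background (a standard result, not specific to whatever scheme the question has in mind): under regularity conditions the MLE from $N$ i.i.d. samples satisfies
$$\sqrt{N}\,\big(\hat\theta_{\mathrm{MLE}}-\theta\big)\;\xrightarrow{d}\;\mathcal{N}\big(0,\,I(\theta)^{-1}\big),$$
so its asymptotic variance matches the Cramér-Rao bound $(N I(\theta))^{-1}$ for unbiased estimators.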
0 votes
0 answers
18 views

Demonstrating $SU=U(\sigma^2 I+D^2)$ as a Sufficient Condition in Maximum Likelihood Estimation

I am working on an exercise related to maximum likelihood estimation (in the context of principal component analysis) for the distribution $$p(x) = Gauss(b, WW^T+\sigma^2I)$$ In particular, I want to ...
Andrea
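A possible sketch, assuming the intended decomposition is $W=UD$ with orthonormal columns $U$ and invertible diagonal $D$, and writing $C=WW^T+\sigma^2 I$ and $S$ for the sample covariance: setting the gradient of the Gaussian log-likelihood to zero gives $SC^{-1}W=W$, and since $CW=UD(D^2+\sigma^2 I)$,
$$SC^{-1}W=W \;\Longleftrightarrow\; S\,UD\,(D^2+\sigma^2 I)^{-1}=UD \;\Longleftrightarrow\; SU=U(\sigma^2 I+D^2),$$
i.e. the columns of $U$ are eigenvectors of $S$ with eigenvalues $\sigma^2+d_j^2$.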
5 votes
2 answers
128 views

Sufficient conditions for asymptotic efficiency of MLE

Maximum-likelihood estimators are, according to Wikipedia, asymptotically efficient; that is, they achieve the Cramér-Rao bound as the sample size tends to infinity. But this seems to require some ...
Luis Mendo
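The conditions usually cited (roughly the Cramér regularity conditions; the exact list varies by textbook) are identifiability, a common support not depending on $\theta$, the true $\theta_0$ in the interior of the parameter space, sufficient smoothness of $\log f(x;\theta)$ in $\theta$ with dominated derivatives, and finite, positive Fisher information. Under them,
$$\sqrt{n}\,\big(\hat\theta_n-\theta_0\big)\;\xrightarrow{d}\;\mathcal{N}\big(0,\,I(\theta_0)^{-1}\big).$$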
2 votes
1 answer
86 views

Maximum Likelihood Estimation for a Unique Probability Density Function

In the context of estimating parameters for a uniquely distributed set of independent and identically distributed random variables, I am examining the following probability density function $ f(x|\...
Occhima
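Since the density in the excerpt is cut off, here is only a generic numerical-MLE sketch with a hypothetical placeholder density (an exponential), not the question's actual $f(x\mid\theta)$:

```python
# Generic numerical MLE sketch; the placeholder density is
# f(x | theta) = (1/theta) * exp(-x/theta), x > 0 (hypothetical).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)          # hypothetical data

def neg_log_lik(theta, data):
    if theta <= 0:
        return np.inf
    return len(data) * np.log(theta) + data.sum() / theta

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 100.0), args=(x,), method="bounded")
print("numerical MLE:", res.x, "closed form (sample mean):", x.mean())
```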
1 vote
0 answers
33 views

Large-sample property of Bayes procedures

I was reading through Wasserman's All of Statistics and I came across this property in the Bayesian statistics chapter: I think I don't really get what is supposed to be the intuition behind it, and ...
DeadKarlMarx
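If the property in question is the usual large-sample normal approximation to the posterior (Wasserman states a version of this), the rough intuition is that with enough data the likelihood swamps the prior, so
$$\theta\mid X_1,\dots,X_n \;\approx\; \mathcal{N}\Big(\hat\theta_{\mathrm{MLE}},\;\big(n\,I(\hat\theta_{\mathrm{MLE}})\big)^{-1}\Big),$$
and Bayesian posterior intervals then approximately coincide with frequentist confidence intervals.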
0 votes
0 answers
29 views

How to derive the (partial) maximum likelihood estimator for a simple autoregressive model

I am trying to derive two maximum likelihood estimators which I have seen in a statistics book, but I am unable to derive them and would really like some help. It goes like this: Consider the simple ...
Rstrobaek
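A sketch for one common version of this exercise, assuming the book's simple autoregressive model is a Gaussian AR(1) without intercept, $y_t=\phi y_{t-1}+\varepsilon_t$ with $\varepsilon_t\sim\mathcal{N}(0,\sigma^2)$: conditioning on $y_1$, the partial log-likelihood
$$\ell(\phi,\sigma^2)=-\frac{T-1}{2}\log(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{t=2}^{T}(y_t-\phi y_{t-1})^2$$
is maximized at
$$\hat\phi=\frac{\sum_{t=2}^{T}y_t y_{t-1}}{\sum_{t=2}^{T}y_{t-1}^2},\qquad \hat\sigma^2=\frac{1}{T-1}\sum_{t=2}^{T}(y_t-\hat\phi y_{t-1})^2.$$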
1 vote
1 answer
69 views

Does increasing the number of observations decrease the mean squared error of consistent estimators?

I know that not all weakly consistent estimators exhibit MSE-consistency: https://stats.stackexchange.com/a/610835/397467. Anyway, does increasing the sample size lead to a reduction in their mean ...
whn
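Not in general; a standard counterexample (constructed here, not taken from the linked answer) contaminates a consistent estimator with a rare but growing spike:
$$T_n=\begin{cases}\bar X_n & \text{with probability } 1-\tfrac1n,\\ n & \text{with probability } \tfrac1n.\end{cases}$$
Then $T_n\xrightarrow{p}\mu$, yet $\mathbb{E}\big[(T_n-\mu)^2\big]\ge (n-\mu)^2/n\to\infty$, so the MSE of a consistent estimator need not decrease with the sample size.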
1 vote
1 answer
78 views

Finding the Variance of the MLE of the Variance of a Joint Normal Distribution

I have a random sample $Z_1,\dots,Z_n$ from a normal distribution $N(\mu,\sigma^{2})$. I am considering them within a joint likelihood function. I know that the MLE ($\hat\sigma^{2}$) of $\sigma^{2}$...
Squarepeg
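For reference, assuming $\hat\sigma^2=\frac1n\sum_{i=1}^n (Z_i-\bar Z)^2$ (the MLE with $\mu$ also estimated): since $n\hat\sigma^2/\sigma^2\sim\chi^2_{n-1}$,
$$\operatorname{Var}\big(\hat\sigma^2\big)=\frac{\sigma^4}{n^2}\operatorname{Var}\big(\chi^2_{n-1}\big)=\frac{2(n-1)\sigma^4}{n^2}.$$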
0 votes
0 answers
41 views

Hierarchical models: Estimating variance and combining two estimators

Assume that $y_i \sim N(50,10)$. I observe a signal with additive Gaussian noise, $s_i \sim N(y_i, \sigma_d^2)$. I observe $n$ such signals, each corresponding to a different $y_i$. I want to estimate $\...
mo si
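A minimal sketch, assuming $N(50,10)$ means variance $10$ and that only the signals $s_i$ are observed: marginally $s_i\sim N(50,\,10+\sigma_d^2)$, so a method-of-moments/ML-style estimate is
$$\hat\sigma_d^2=\max\!\Big(0,\;\frac1n\sum_{i=1}^{n}(s_i-50)^2-10\Big).$$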
1 vote
2 answers
207 views

Bayesian Learning: Finding the variance of noise

Suppose $x_i \sim N(10,4)$, i.e., the distribution is known. There is a noisy signal $s_i \sim N(x_i, \sigma_e^2)$, and I want to estimate $\sigma_e$. I see some pairs ($s_i, x_i$) but they are not '...
user20380762
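A rough sketch under two readings of the truncated setup, taking $N(10,4)$ to mean variance $4$: with matched pairs one would use the residuals directly, and with only the marginal of $s_i$ one would subtract the known variance,
$$\hat\sigma_e^2=\frac1n\sum_i (s_i-x_i)^2 \qquad\text{or}\qquad \hat\sigma_e^2=\max\!\Big(0,\;\widehat{\operatorname{Var}}(s)-4\Big).$$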
1 vote
1 answer
32 views

Generating "surrogate data" to calculate error on estimators

We have a dataset in the form of a time series $Y_n$. We assume it follows an underlying parametric distribution $f(n,\beta)$, $\beta$ being the parameters. From the observed dataset, we get an ...
Barbaud Julien
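This is essentially a parametric bootstrap; a minimal sketch, with a hypothetical i.i.d. Gaussian model standing in for the unspecified $f(n,\beta)$:

```python
# Parametric-bootstrap ("surrogate data") sketch for estimator errors.
# The model f(n, beta) is unspecified in the question; an i.i.d. Gaussian with
# beta = (mu, sigma) is used here purely as a placeholder.
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(5.0, 2.0, size=300)              # hypothetical observed series

def fit(data):                                   # placeholder estimator beta_hat
    return data.mean(), data.std(ddof=1)

beta_hat = fit(y)

B = 1000
boot = np.empty((B, 2))
for b in range(B):
    surrogate = rng.normal(beta_hat[0], beta_hat[1], size=len(y))  # simulate from the fitted model
    boot[b] = fit(surrogate)                     # re-estimate on the surrogate series

print("estimates:", beta_hat)
print("bootstrap standard errors:", boot.std(axis=0, ddof=1))
```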
0 votes
1 answer
539 views

Maximum-likelihood estimator for data points with errors

Suppose there are $N$ measurements of a random variable $x$ which has a Gaussian p.d.f. with unknown mean $\mu$ and variance $\sigma^2$. The classical textbook solution for estimating $\mu$ and $\sigma$ is to ...
Alexander
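One common way to fold known per-point measurement errors $s_i$ into the Gaussian model (one possible reading of the question) is $x_i\sim\mathcal{N}(\mu,\,\sigma^2+s_i^2)$, with log-likelihood
$$\ell(\mu,\sigma^2)=-\frac12\sum_{i=1}^{N}\left[\log\big(2\pi(\sigma^2+s_i^2)\big)+\frac{(x_i-\mu)^2}{\sigma^2+s_i^2}\right];$$
for fixed $\sigma^2$ the optimal $\mu$ is the inverse-variance weighted mean, and $\sigma^2$ is then found numerically.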
2 votes
2 answers
682 views

What does the likelihood function converge to when sample size is infinite?

Let $\mathcal{L}(\theta\mid x_1,\ldots,x_n)$ be the likelihood function of parameters $\theta$ given i.i.d. samples $x_i$ with $i=1,\ldots,n$. I know that under some regularity conditions the $\theta$ ...
Tendero
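The usual normalized statement: by the law of large numbers, for each fixed $\theta$,
$$\frac1n\log\mathcal{L}(\theta\mid x_1,\dots,x_n)=\frac1n\sum_{i=1}^{n}\log f(x_i;\theta)\;\xrightarrow{\text{a.s.}}\;\mathbb{E}_{\theta_0}\big[\log f(X;\theta)\big],$$
and the limit is maximized at $\theta_0$ because the difference from its value at $\theta_0$ equals $-\mathrm{KL}(f_{\theta_0}\,\|\,f_\theta)\le 0$. The unnormalized likelihood itself does not converge to a fixed function; only the normalized log-likelihood has a limit.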
