Questions tagged [fisher-information]

The Fisher information measures the curvature of the log-likelihood and can be used to assess the efficiency of estimators.
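For reference, a definition consistent with the tag description: the Fisher information is the variance of the score, and under standard regularity conditions it equals the expected negative curvature of the log-likelihood,

$$ I(\theta) = \operatorname{Var}_\theta\!\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right) = \mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{\!2}\,\right] = -\,\mathbb{E}_\theta\!\left[\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right]. $$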

2 votes · 2 answers · 47 views

Confusion over Fisher-scoring algorithm

Given a probability model $f(X;\theta)$ and a set of i.i.d. observations $x_1,\ldots,x_n$, which we assume are drawn from $f(X;\theta_0)$ for some true parameter $\theta_0$, we can perform maximum-likelihood ...
asked by shem
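The Fisher-scoring iteration replaces the observed Hessian in Newton's method with the expected information: $\theta_{t+1} = \theta_t + I(\theta_t)^{-1} s(\theta_t)$, where $s$ is the score. The question above doesn't fix a model, so here is a minimal sketch for logistic regression (an assumed example), where Fisher scoring is the familiar IRLS algorithm:

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25, tol=1e-10):
    """Fit logistic regression by Fisher scoring (IRLS).

    A sketch, not a hardened implementation: no step-size control or
    regularization, and X is assumed to have full column rank.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        score = X.T @ (y - p)                 # score (gradient of log-lik)
        w = p * (1.0 - p)                     # Bernoulli variance weights
        info = X.T @ (w[:, None] * X)         # expected information X^T D X
        step = np.linalg.solve(info, score)   # Fisher-scoring step
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy usage on synthetic data (names and sizes are arbitrary).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ np.array([-0.5, 1.5]))))
print(fisher_scoring_logistic(X, y))
```

For the canonical (logit) link the observed and expected information coincide, so this particular case is also plain Newton's method; for non-canonical links the two updates differ.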
6 votes · 3 answers · 449 views

Derivative of the Score Function in Fisher Information

I'm studying Fisher information and am trying to develop an intuitive understanding. Keep in mind I only have a bachelor's-level mathematics background, so I would appreciate an answer that is more ...
asked by Ryan
0 votes · 0 answers · 13 views

Specific Question about Deriving the Fisher Information for a Complex Multivariate Normal Distribution [duplicate]

I am starting with the following form for the likelihood function for a complex multivariate normal distribution for data with dimension $d$ and mean $\boldsymbol \mu$: $$ p(\mathbf x|\boldsymbol \...
asked by StackMonkey
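The excerpt is cut off; assuming the question starts from the standard circularly-symmetric complex Gaussian density,

$$ p(\mathbf x \mid \boldsymbol\mu, \boldsymbol\Sigma) = \frac{1}{\pi^d \det\boldsymbol\Sigma}\, \exp\!\left(-(\mathbf x-\boldsymbol\mu)^H \boldsymbol\Sigma^{-1} (\mathbf x-\boldsymbol\mu)\right), $$

the Fisher information for real parameters entering through $\boldsymbol\mu(\boldsymbol\theta)$ and $\boldsymbol\Sigma(\boldsymbol\theta)$ is given by the complex Slepian–Bangs formula,

$$ [I(\boldsymbol\theta)]_{ij} = \operatorname{tr}\!\left(\boldsymbol\Sigma^{-1}\frac{\partial\boldsymbol\Sigma}{\partial\theta_i}\,\boldsymbol\Sigma^{-1}\frac{\partial\boldsymbol\Sigma}{\partial\theta_j}\right) + 2\,\operatorname{Re}\!\left\{\frac{\partial\boldsymbol\mu^H}{\partial\theta_i}\,\boldsymbol\Sigma^{-1}\frac{\partial\boldsymbol\mu}{\partial\theta_j}\right\}. $$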
0 votes · 0 answers · 13 views

Connection between mean update in CMA-ES and gradient of expected fitness

I am currently learning about black-box optimization and CMA-ES, and I am trying to understand some of its theoretical foundations. The update of the mean in classic CMA-ES is as follows: $$m \leftarrow m +...
asked by HansDoe
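The excerpt is cut off, but the classic CMA-ES mean update it presumably refers to is weighted recombination of the best offspring,

$$ m \leftarrow m + c_m \sum_{i=1}^{\mu} w_i\,(x_{i:\lambda} - m), $$

where $x_{i:\lambda}$ is the $i$-th best of $\lambda$ samples and the $w_i$ are positive weights summing to one. The connection to the gradient of expected fitness comes from the information-geometric (IGO / natural-evolution-strategies) view: this update can be interpreted as a Monte Carlo natural-gradient step on $\mathbb E_{x \sim \mathcal N(m,\,\sigma^2 C)}[f(x)]$, i.e. a sample gradient preconditioned by the inverse Fisher information of the Gaussian search distribution.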
1 vote · 1 answer · 28 views

Expected value of the squared derivative of the log-likelihood equivalent to the regularity condition

I'm given the following information (*): $\mathbb E\left[\left(\frac{\partial}{\partial \theta} \log f(X_1, \ldots, X_n; \theta)\right)^{\!2}\right] = -\,\mathbb E\left[\frac{\partial^2}{\partial \theta^2} \log f(X_1, \ldots, X_n; \theta)\right]$...
asked by JohnD
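A sketch of the standard derivation of (*), using only the regularity condition that integration and differentiation with respect to $\theta$ may be interchanged. Writing $f = f(x_1,\ldots,x_n;\theta)$ and using $\frac{\partial^2}{\partial\theta^2} f = f\,\frac{\partial^2}{\partial\theta^2}\log f + f\left(\frac{\partial}{\partial\theta}\log f\right)^2$, differentiate $\int f\,dx = 1$ twice:

$$ 0 = \int \frac{\partial^2}{\partial\theta^2} f\,dx = \mathbb E\!\left[\frac{\partial^2}{\partial\theta^2}\log f\right] + \mathbb E\!\left[\left(\frac{\partial}{\partial\theta}\log f\right)^{\!2}\,\right], $$

which rearranges to the stated identity.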
1 vote · 1 answer · 34 views

Why can we get better asymptotic global estimators even for IID random variables?

Let $X_1,...,X_N$ be IID random variables sampled from a parametrised distribution $p_\theta$, and suppose my goal is to retrieve $\theta$ from these samples. We know that the MLE provides an ...
asked by glS
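For reference, the benchmark the question compares against: under standard regularity conditions the MLE is asymptotically efficient,

$$ \sqrt{N}\,\big(\hat\theta_{\mathrm{MLE}} - \theta\big) \xrightarrow{d} \mathcal N\!\big(0,\; I_1(\theta)^{-1}\big), $$

where $I_1(\theta)$ is the Fisher information of a single sample, and by Le Cam's result superefficient estimators can beat this variance only on a Lebesgue-null set of parameter values.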
1 vote · 0 answers · 43 views

How does reparametrization of the Fisher information matrix change the variance expression for the sufficient statistics?

If I have an exponential family distribution of the form $$p_{\theta}(x) = e^{\theta^T\cdot t(x) - \psi(\theta)},$$ where $\theta$ is a vector of parameters, $t(x)$ is a vector of sufficient ...
asked by absolutelyzeroEQ
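The relevant facts, sketched: for a natural-parameter exponential family the log-partition function $\psi$ generates the moments of the sufficient statistics,

$$ \nabla\psi(\theta) = \mathbb E_\theta[t(X)], \qquad I(\theta) = \nabla^2\psi(\theta) = \operatorname{Cov}_\theta\big(t(X)\big), $$

and under a smooth reparametrization $\theta = \theta(\eta)$ with Jacobian $J = \partial\theta/\partial\eta$ the Fisher information transforms covariantly, $I(\eta) = J^\top I\big(\theta(\eta)\big)\,J$, which is what changes the variance expression for $t(X)$ when coordinates change.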
0 votes · 0 answers · 30 views

Bootstrap method with 2 Fisher matrices in order to do the cross-correlations between both

I have 2 Fisher matrices where each column/row represents the information (in Fisher's sense) about astrophysical parameters. These parameters are in the same order in both matrices. Now, I would like to ...
asked by foutou_10
1 vote · 1 answer · 67 views

What am I doing wrong when finding the Fisher information of a binomial distribution with $n=2$?

I am trying to find the Fisher information of a binomial distribution where $n=2$ and $p=\theta$. I have the log-likelihood function as $$n\ln2 + \sum^{n}_{i=1}x_i\ln \theta + (2n-\sum^{n}_{i=1}x_i)(...
asked by Peverel Shipley
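For comparison, the standard result for a single observation $X \sim \mathrm{Binomial}(2,\theta)$: with $\ell(\theta) = \log\binom{2}{x} + x\log\theta + (2-x)\log(1-\theta)$,

$$ \ell''(\theta) = -\frac{x}{\theta^2} - \frac{2-x}{(1-\theta)^2}, \qquad I(\theta) = -\mathbb E[\ell''(\theta)] = \frac{2\theta}{\theta^2} + \frac{2-2\theta}{(1-\theta)^2} = \frac{2}{\theta(1-\theta)}, $$

using $\mathbb E[X] = 2\theta$; for $n$ i.i.d. observations multiply by $n$. Checking intermediate steps against this is a quick way to locate the error.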
3 votes · 0 answers · 89 views

Can the Fisher information matrix be calculated numerically through finite differences?

In GLMs (generalized linear models), the Fisher information matrix (the negative of the expected Hessian of the log-likelihood) takes the form of a cross-product between the covariates $\mathbf{X}$ and a diagonal matrix: $$ \mathbf{X}^T \mathbf{D} \...
asked by anymous.asker
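In principle yes: one can approximate the observed information (the negative Hessian of the log-likelihood at $\hat\theta$) by central finite differences and use it as a plug-in for the FIM. A minimal sketch, assuming a user-supplied log-likelihood function loglik(theta) for the full sample (the name is hypothetical):

```python
import numpy as np

def observed_information_fd(loglik, theta, h=1e-5):
    """Approximate the observed information -H(theta) by central
    finite differences of the log-likelihood.

    A sketch only: the step h may need tuning, and this yields the
    observed information; approximating the *expected* FIM would
    require averaging over simulated datasets at theta.
    """
    theta = np.asarray(theta, dtype=float)
    p = theta.size
    H = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            ei = h * np.eye(p)[i]
            ej = h * np.eye(p)[j]
            # central difference for the (i, j) mixed partial
            H[i, j] = (loglik(theta + ei + ej) - loglik(theta + ei - ej)
                       - loglik(theta - ei + ej) + loglik(theta - ei - ej)
                       ) / (4.0 * h * h)
    return -H
```

In practice the choice of $h$ and the scaling of the parameters matter; automatic differentiation, where available, is usually more stable.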
0 votes · 0 answers · 28 views

Fisher information in Laplace approximation

Let $X$ and $Y$ be continuous random variables with Probability Density Function (PDF) $f_X$ and $f_Y$, respectively. Upon observing $Y=y$, the log-posterior PDF is given by Bayes' rule in log form: $$...
asked by W. Zhu
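The excerpt is cut off, but the role the (observed) information plays in the Laplace approximation is standard: expanding the log-posterior to second order around its mode $\hat x$,

$$ \log p(x \mid y) \approx \log p(\hat x \mid y) - \tfrac{1}{2}(x - \hat x)^\top \Lambda\,(x - \hat x), \qquad \Lambda = -\nabla_x^2 \log p(x \mid y)\big|_{x = \hat x}, $$

so the posterior is approximated by $\mathcal N\!\big(\hat x,\ \Lambda^{-1}\big)$, with the negative Hessian of the log-posterior acting as the precision matrix.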
0 votes · 0 answers · 33 views

Looking for an intuitive explanation of D-Criterion for Optimal Design Problem

I know only a little about Fisher information and optimal experimental design, but I'm trying to better understand the subject. If I have an experiment composed of a single detector and my detector ...
asked by David G.
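One compact way to state the intuition: a design $\xi$ is D-optimal if it maximizes $\det I(\theta;\xi)$. Since the volume of the asymptotic confidence ellipsoid for $\hat\theta$ is proportional to $\det I(\theta;\xi)^{-1/2}$,

$$ \max_\xi\ \det I(\theta;\xi) \quad\Longleftrightarrow\quad \min_\xi\ \operatorname{vol}\big(\text{confidence ellipsoid for } \hat\theta\big), $$

so D-optimality shrinks the joint uncertainty over all parameters at once, and the criterion is invariant under nonsingular linear reparametrizations of $\theta$.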
3 votes · 1 answer · 47 views

Cramér-Rao / Wolfowitz bound with nuisance parameter

Let $F$ be a distribution with two parameters, $\theta$ and $\phi$, whose values are non-random but unknown. Consider a sampling procedure in which $N$ samples $x_1, \ldots, x_N$ are obtained from i.i....
asked by Luis Mendo
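For reference, the standard form of the bound in this setting: partition the Fisher information of $(\theta,\phi)$ into blocks; the bound for an unbiased estimator of $\theta$ alone is the corresponding element of the inverse matrix,

$$ \operatorname{Var}(\hat\theta) \ \ge\ \big[I(\theta,\phi)^{-1}\big]_{\theta\theta} = \frac{1}{I_{\theta\theta} - I_{\theta\phi}\,I_{\phi\phi}^{-1}\,I_{\phi\theta}} \ \ge\ \frac{1}{I_{\theta\theta}}, $$

so an unknown nuisance parameter can only loosen the bound, with equality iff $I_{\theta\phi} = 0$.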
0 votes · 0 answers · 59 views

Why is Fisher information the Precision of MLE rather than Covariance? [duplicate]

I'm confused because if the FIM is $I(\theta)=\operatorname{Var}_x(s(\theta|x))=\operatorname{Var}_x\!\left(\frac{d}{d\theta}\log L(\theta|x)\right)$ (the variance of the score) and the MLE estimates are $\theta^*=dt * \sum^\infty_{t=0}{d\over d\...
asked by profPlum
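The short answer the title asks about, sketched: asymptotically the MLE satisfies

$$ \hat\theta_{\mathrm{MLE}} \ \approx\ \mathcal N\!\big(\theta,\ I(\theta)^{-1}\big), $$

so the Fisher information is the (asymptotic) precision, i.e. the inverse covariance, of the MLE: the variance of the score and the variance of the estimator are inversely related, not equal.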
1 vote · 0 answers · 42 views

Derive the Cramér-Rao lower bound for $Var(\hat{\theta})$ given that $\mathbb{E}[\hat{\theta}U]=1$

I am trying to derive the Cramér-Rao lower bound for $Var(\hat{\theta})$ given that we already know $\mathbb{E}[U]=0$, $Var(U)=I(\theta)$ and $\mathbb{E}[\hat{\theta}U]=1$. I am struggling with using ...
asked by Lucas
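A sketch of the intended derivation from exactly the three given facts: since $\mathbb E[U] = 0$,

$$ \operatorname{Cov}(\hat\theta, U) = \mathbb E[\hat\theta U] - \mathbb E[\hat\theta]\,\mathbb E[U] = 1, $$

and by the Cauchy–Schwarz inequality,

$$ 1 = \operatorname{Cov}(\hat\theta, U)^2 \ \le\ \operatorname{Var}(\hat\theta)\,\operatorname{Var}(U) = \operatorname{Var}(\hat\theta)\,I(\theta) \quad\Longrightarrow\quad \operatorname{Var}(\hat\theta) \ \ge\ \frac{1}{I(\theta)}. $$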
