Results tagged [maximum-likelihood]: answers only, not deleted, user 397125

For questions that use the method of maximum likelihood for estimating the parameters of a statistical model with given data.

1 vote

Injective function of an MLE is an MLE

It seems like you are using your intuition for what it takes to maximize the function $g.$ But that has nothing to do with maximizing the likelihood function $L(\theta;x)$. They are completely differe …
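
As a numeric sketch of the property in the question's title (the exponential model and the map $g(\lambda)=1/\lambda$ are my stand-ins, not the question's): reparameterizing the likelihood by an injective $g$ moves the maximizer to $g(\hat\lambda)$.

```python
import numpy as np

# Sketch: exponential likelihood in the rate lambda, then reparameterized by
# the injective map g(lam) = 1/lam (the mean). The maximizer moves to g(lam_hat).
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)         # true rate 0.5

def loglik_rate(lam):
    return len(x) * np.log(lam) - lam * x.sum()  # exponential log likelihood

lams = np.linspace(0.01, 2, 4000)
lam_hat = lams[np.argmax(loglik_rate(lams))]

mus = np.linspace(0.1, 5, 4000)
mu_hat = mus[np.argmax(loglik_rate(1 / mus))]    # same likelihood, parameter mu = g(lam)

print(lam_hat, 1 / x.mean())   # grid MLE of the rate vs. the closed form
print(mu_hat, 1 / lam_hat)     # MLE of the mean equals g(MLE of the rate)
```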
0 votes
Accepted

Maximum likelihood estimator (1)

Remember that the likelihood function is a function of $a.$ The likelihood function on data $x_1,\ldots,x_n$ is $$L(a) = \left\{\begin{array}{ll}\left(\frac{53}{2a^{53}}\right)^n(x_1x_2\ldots x_n)^{52} & a …
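
A sketch only, since the excerpt's density is cut off: assume $f(x;a) = \frac{53|x|^{52}}{2a^{53}}$ on $[-a,a]$, which integrates to 1 and matches the factor shown above. The log likelihood then decreases in $a$ wherever it is finite, so the MLE sits at the boundary $\max_i |x_i|$.

```python
import numpy as np

# Assumed density (mine): f(x; a) = 53 |x|^52 / (2 a^53) on [-a, a].
rng = np.random.default_rng(1)
a_true, n = 2.0, 100
u = rng.uniform(size=n)
x = a_true * u ** (1 / 53) * rng.choice([-1, 1], size=n)  # inverse-CDF draw

def loglik(a):
    if a < np.abs(x).max():          # an observation outside [-a, a]
        return -np.inf
    return n * np.log(53 / (2 * a ** 53)) + 52 * np.log(np.abs(x)).sum()

grid = np.linspace(1.0, 4.0, 3000)
print(grid[np.argmax([loglik(a) for a in grid])], np.abs(x).max())
# The log likelihood decreases in a past max|x_i|, so the MLE is max|x_i|.
```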
1 vote
Accepted

ML estimation help with Poisson-like data

Per our discussion in the comments, MLE does not seem like a pragmatic choice for estimating these parameters. Because it works using the whole probability distribution, the MLE must home in on the fa …
1 vote

Maximum likelihood- and a posteriori reasoning

I think you've done what's intended. The 'data' is that you picked a silver and the 'hypotheses' are that the candy is a nougat / a licorice (though this is a questionable use of 'hypothesis' in my op …
1 vote

Maximum Likelihood Estimate for an Unknown Distribution

Here the CDF is the thing you are estimating. You can think of its values as an infinite number of parameters (in a constrained space that says they need to comprise a right-continuous, nondecreasing …
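
A small sketch of the estimator this describes: the empirical CDF, which places mass $1/n$ at each observation (the standard nonparametric MLE; the data here are made up).

```python
import numpy as np

# Empirical CDF: a right-continuous, nondecreasing step function that jumps
# by 1/n at each observation.
def ecdf(data):
    xs = np.sort(data)
    def F(t):
        return np.searchsorted(xs, t, side="right") / len(xs)
    return F

x = np.array([0.3, 1.2, 0.7, 2.4, 0.9])
F = ecdf(x)
print(F(0.8), F(2.4), F(-1.0))   # 0.4, 1.0, 0.0
```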
3 votes

Maximum likelihood estimator for uniform distribution $U(-\theta, 0)$

Your reasoning for the $U(0,\theta)$ case is wrong, and it is interfering with the $U(-\theta,0)$ case. In the $U(0,\theta)$ case the likelihood (which is a function of $\theta$!) is $\frac{1}{\theta^n}$ …
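
A numeric check of this point (a sketch; data and grid are mine): the likelihood $\theta^{-n}\,\mathbf{1}\{\theta \ge \max_i x_i\}$ is decreasing wherever it is nonzero, so it is maximized exactly at $\theta = \max_i x_i$.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 3.0, size=50)    # U(0, theta) data with true theta = 3
n = len(x)

def loglik(theta):
    # zero likelihood unless every observation fits inside (0, theta)
    return -np.inf if theta < x.max() else -n * np.log(theta)

grid = np.linspace(0.1, 6, 5000)
print(grid[np.argmax([loglik(t) for t in grid])], x.max())
# For U(-theta, 0) the mirror argument gives theta_hat = -min x_i.
```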
2 votes
Accepted

Maximum Likelihood Estimate With Factorial

The likelihood function you're maximizing is a function of $\theta$ so the $k!$ is just a multiplicative constant. It has no effect on what value of $\theta$ maximizes the function. (One way to see th …
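
A quick sketch of this (a Poisson model is my guess from the $k!$ in the excerpt): the $\log(k!)$ term is additive and free of $\theta$, so dropping it shifts the log likelihood by a constant without moving its argmax.

```python
import numpy as np
from math import lgamma

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])
const = sum(lgamma(k + 1) for k in x)   # sum of log(k!) over the data
thetas = np.linspace(0.5, 10, 2000)

def loglik(theta, keep_constant):
    ll = x.sum() * np.log(theta) - len(x) * theta
    return ll - const if keep_constant else ll

print(thetas[np.argmax(loglik(thetas, True))],
      thetas[np.argmax(loglik(thetas, False))],
      x.mean())   # same argmax either way, near the sample mean (up to grid spacing)
```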
1 vote

MLE - Likelihood function has no maximum

The density function is not $\frac{2x}{\alpha^2}.$ It is $\frac{2x}{\alpha^2}$ for $0<x<\alpha$ and zero otherwise. This means that the joint density for $n$ variables is zero if any of the $x_i$ are …
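
A sketch of the answer's point (data are mine): with the open support $0<x<\alpha$, the likelihood is $0$ for $\alpha \le \max_i x_i$ and proportional to $\alpha^{-2n}$ above it, so the supremum as $\alpha \downarrow \max_i x_i$ is never attained and no maximizer exists.

```python
import numpy as np

x = np.array([0.4, 1.1, 0.8, 1.9])
n = len(x)

def likelihood(alpha):
    if x.max() >= alpha:             # some x_i falls outside (0, alpha)
        return 0.0
    return np.prod(2 * x / alpha**2)

for alpha in [1.9, 1.9001, 1.91, 2.0, 2.5]:
    print(alpha, likelihood(alpha))  # jumps up just past 1.9, then decreases
```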
2 votes

Maximum likelihood estimation problem

The likelihood on this data is $1/n$ if $n\ge 100$ and zero otherwise (since a draw of $100$ is impossible if $n<100$). This is maximized for $n=100,$ so the MLE is $100.$
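
A one-line check of this (a sketch; the range of candidate $n$ is arbitrary):

```python
# Likelihood of drawing 100 from {1, ..., n}: 1/n when n >= 100, else 0.
lik = lambda n: 1 / n if n >= 100 else 0.0
print(max(range(1, 1000), key=lik))   # 100
```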
1 vote
Accepted

Probability distribution and likelihood are same?

If I'm interpreting you correctly you are given $20$ independent samples that are Poisson-distributed with mean $\theta t$ (where $t$ is the same for each one - representing the duration of some inter …
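
A sketch of the setup as described (the value of $t$ and the data are my stand-ins): maximizing the Poisson log likelihood numerically recovers the usual closed form $\hat\theta = \bar x / t$.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(3)
t = 2.5
x = rng.poisson(1.2 * t, size=20)    # 20 Poisson(theta * t) counts, true theta = 1.2

def loglik(theta):
    mu = theta * t
    return (x * np.log(mu) - mu).sum() - sum(lgamma(k + 1) for k in x)

thetas = np.linspace(0.05, 5, 5000)
print(thetas[np.argmax([loglik(th) for th in thetas])], x.mean() / t)
```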
3 votes

Minimum mean squared error of uniform distribution

HINT: Do the mirror-image problem of $U(0,\theta)$ and find what $\rho$ makes the estimator $\rho x_{(n)}$ have the lowest mean squared error. The answer to this question will be the same $\rho$ …
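
A Monte Carlo sketch of the hinted mirror problem ($\theta$, $n$, and the grid are my choices). Using $E X_{(n)} = \frac{n\theta}{n+1}$ and $E X_{(n)}^2 = \frac{n\theta^2}{n+2}$, the minimizing $\rho$ works out to $\frac{n+2}{n+1}$, which the simulation matches.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 1.0, 10, 200_000
xmax = rng.uniform(0, theta, size=(reps, n)).max(axis=1)  # X_(n) draws

rhos = np.linspace(0.9, 1.3, 81)
mse = [np.mean((r * xmax - theta) ** 2) for r in rhos]
print(rhos[np.argmin(mse)], (n + 2) / (n + 1))   # both near 1.0909
```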
2 votes
Accepted

Estimation of coefficients in linear regression

We have $$\begin{eqnarray}\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) &=&\sum_{i=1}^n x_iy_i- \bar x\sum_{i=1}^ny_i - \bar{y}\sum_{i=1}^nx_i + \bar x\bar y \sum_{i=1}^n 1 \\&=& \sum_{i=1}^nx_iy_i-n\bar x\b …
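
A quick numeric check of the identity being expanded (random data, mine):

```python
import numpy as np

# sum (x_i - xbar)(y_i - ybar) = sum x_i y_i - n * xbar * ybar
rng = np.random.default_rng(5)
x, y = rng.normal(size=30), rng.normal(size=30)
lhs = ((x - x.mean()) * (y - y.mean())).sum()
rhs = (x * y).sum() - len(x) * x.mean() * y.mean()
print(np.isclose(lhs, rhs))   # True
```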
2 votes
Accepted

MLE for normal distribution

The second derivative needs to be less than zero at $\theta^*$; there, the first term of the prefactor is zero and what’s left, $-n$, is clearly negative. On a side note, it’s easier here, and a good id …
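
Following the side note, here is a symbolic check working with the log likelihood directly (unit variance is my assumption): for $x_i \sim N(\theta, 1)$, $\ell(\theta) = -\frac12\sum_i (x_i-\theta)^2 + \text{const}$, and $\ell''(\theta) = -n < 0$ everywhere, so the stationary point is a maximum.

```python
import sympy as sp

theta, n = sp.symbols("theta n", positive=True)
x = sp.IndexedBase("x")
i = sp.symbols("i", integer=True)

# log likelihood up to a constant that does not depend on theta
ll = -sp.Rational(1, 2) * sp.Sum((x[i] - theta) ** 2, (i, 1, n))
print(sp.simplify(sp.diff(ll, theta, 2).doit()))   # -n
```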
0 votes
Accepted

What is the point of the maximum likelihood estimator?

In short, MLE is one way of fitting a model to data. This is the real-life application, and it is in fact used in a lot of real-life fitting procedures in practice. (For instance, it is one standard w …
3 votes
Accepted

EM algorithm for Exponential random variables

The EM algorithm has two steps. First you find the expected log likelihood (where the expectation is taken under the current parameters and conditional on whatever data you can see) and then you a …
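
The question's exact model is cut off above, so as a generic illustration of the two steps the answer names, here is EM for a two-component exponential mixture (weights $\pi_k$, rates $\lambda_k$); all names and values are mine.

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.concatenate([rng.exponential(1 / 0.5, 300),   # rate 0.5 component
                    rng.exponential(1 / 4.0, 700)])  # rate 4.0 component

pi, lam = np.array([0.5, 0.5]), np.array([1.0, 2.0])  # initial guess
for _ in range(200):
    # E-step: expected component memberships under the current parameters
    dens = pi * lam * np.exp(-np.outer(x, lam))       # shape (n, 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: maximize the expected complete-data log likelihood
    pi = r.mean(axis=0)
    lam = r.sum(axis=0) / (r * x[:, None]).sum(axis=0)

print(pi, lam)   # roughly [0.3, 0.7] and [0.5, 4.0] (up to label order)
```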
