20

I'm supposed to calculate the MLEs for $a$ and $b$ from a random sample $(X_1,\ldots,X_n)$ drawn from a uniform distribution on $[a,b]$. But the likelihood function, $\mathcal{L}(a,b)=\frac{1}{(b-a)^n}$, is constant; how do I find a maximum? I would appreciate tips on how to proceed!

3
  • 3
    The likelihood function (which is a function of $a$ and $b$) does not seem constant to me.
    – JiK
    Commented Sep 1, 2014 at 7:55
  • Keep in mind that you should maximize $\frac{1}{(b-a)^n}$, subject to $a \leq X_i \leq b$ for all $i$.
    – Math-fun
    Commented Jan 20, 2015 at 9:27
  • math.stackexchange.com/questions/233778/… Commented Oct 5, 2018 at 18:17

4 Answers

17

First, $ a\leq \min(X_1 , \ldots , X_n) $ and $ b\geq \max(X_1 , \ldots , X_n) $.

That is because otherwise we could not have observed samples $ X_i $ less than $ a $ or greater than $ b $, since the distribution is

$$ X_i \sim \operatorname{Unif}(a,b) $$

and the minimum value $ X_i $ can take is $ a $, while the maximum value it can take is $ b $.

The likelihood function is

$$ \mathcal{L}(a,b)= \prod_{i=1}^n f(x_i;a,b) = \prod_{i=1}^n \frac{1}{(b-a)} = \frac{1}{(b-a)^n} $$

Consider the log-likelihood function

$$ \log\mathcal{L}(a,b) = \log \prod_{i=1}^{n} f(x_i;a,b) = \log \prod_{i=1}^{n} \frac{1}{b-a} = \log\big((b-a)^{-n}\big) = -n \log(b-a) $$

Note that we are looking for the arguments $a$ and $b$ that maximize the likelihood (equivalently, the log-likelihood), subject to the constraints above.

Now, to find $ \hat{a}_{MLE} $ and $ \hat{b}_{MLE} $, take the partial derivatives of the log-likelihood with respect to $ a $ and $ b $:

$$ \frac{\partial}{\partial a} \log\mathcal{L}(a,b) = \frac{n}{b-a}, \qquad \frac{\partial}{\partial b} \log\mathcal{L}(a,b) = -\frac{n}{b-a} $$

The derivative with respect to $ a $ is positive, so the log-likelihood is increasing in $ a $; we therefore take the largest $ a $ the constraint allows, which is $$ \hat{a}_{MLE}=\min(X_1 , \ldots , X_n) $$

The derivative with respect to $ b $ is negative, so the log-likelihood is decreasing in $ b $; we therefore take the smallest $ b $ the constraint allows, which is $$ \hat{b}_{MLE}=\max(X_1 , \ldots , X_n) $$
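
As a quick numerical sanity check, here is a minimal Python sketch (the simulated sample, the seed, and the helper name `log_likelihood` are my own, for illustration only):

```python
import numpy as np

# Simulate a sample from Unif(2, 5) (illustrative values)
rng = np.random.default_rng(0)
a_true, b_true = 2.0, 5.0
x = rng.uniform(a_true, b_true, size=1000)

# MLEs: the sample minimum and maximum
a_hat, b_hat = x.min(), x.max()
print(a_hat, b_hat)  # slightly inside [2, 5], as expected

def log_likelihood(a, b, x):
    """Log-likelihood of Unif(a, b); -inf when any observation falls outside [a, b]."""
    if a > x.min() or b < x.max() or a >= b:
        return -np.inf
    return -x.size * np.log(b - a)

# Widening the interval beyond [min, max] strictly lowers the log-likelihood
assert log_likelihood(a_hat, b_hat, x) > log_likelihood(a_hat - 0.1, b_hat + 0.1, x)
```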

3
  • 1
    Formatting tip: use \max, \min, \log; they give proper spacing, like this: $\log,\min,\max$.
    – kingW3
    Commented Jul 2, 2017 at 14:06
  • 4
    Why take logarithms here? And why use derivatives? The function $(a,b)\mapsto \dfrac 1 {(b-a)^n}$ clearly increases as $a$ and $b$ get closer together; therefore the solution is simply to put them as close together as the constraints allow. The constraints are only that $a$ must not exceed the smallest observation and $b$ must not be less than the largest. I think this answer is more complicated than it needs to be. Commented Oct 5, 2018 at 18:41
  • 2
    Not only complicated but using derivatives here is potentially misleading as the likelihood is not differentiable at $(a,b)=(\min x_i,\max x_i)$. Commented May 23, 2019 at 19:21
5

The likelihood is simply the joint density of the observed data under the given parametric assumptions. Here the density is $f(x)=\frac{1}{b-a}\mathbf{1}_{[a,b]}(x)$, so $\mathcal{L}(a,b)=\frac{\prod\limits_{i=1}^n \mathbf{1}_{[a,b]}(x_i)}{(b-a)^n}$. The key is the numerator: most people forget it and then wonder why we don't just set $a=b$. Thus, to maximize the likelihood, you need to minimize $(b-a)$ subject to all the data being contained in $[a,b]$. Hence you want $a=\min_i x_i$ and $b=\max_i x_i$.
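
A minimal sketch of this likelihood with the indicator written out explicitly (Python with NumPy assumed; the data values are made up):

```python
import numpy as np

def likelihood(a, b, x):
    # Numerator: 1 only if every observation lies in [a, b], else 0
    inside = np.all((a <= x) & (x <= b))
    return float(inside) / (b - a) ** x.size if b > a else 0.0

x = np.array([2.3, 3.1, 4.8])
print(likelihood(2.0, 5.0, x))  # positive, but the interval is wider than needed
print(likelihood(2.3, 4.8, x))  # the maximum: a = min(x), b = max(x)
print(likelihood(3.0, 4.8, x))  # zero: the observation 2.3 falls outside [a, b]
```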

5

Think about it a bit. If $b$ is less than the maximum of the observations, then the likelihood is $0$. Similarly, if $a$ is greater than the minimum of the observations, then the likelihood is also $0$ (since you would have observations lying outside $[a,b]$, an event with probability $0$). Then, if you make $b$ bigger than the max or $a$ smaller than the min, the denominator of the likelihood gets bigger (since $b-a$ clearly gets bigger), so the likelihood is necessarily lower than it is at $b=\max_i X_i$ and $a = \min_i X_i$.
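
For a concrete (made-up) numeric illustration, take $n=3$ observations with minimum $2.3$ and maximum $4.8$. Then
$$ \mathcal{L}(2.3,\,4.8)=\frac{1}{2.5^3}=0.064 \;>\; \mathcal{L}(2.0,\,5.0)=\frac{1}{3^3}\approx 0.037, $$
while any interval that excludes an observation, say $[3.0,\,4.8]$, has likelihood $0$.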

1
  • Well, I think there is a typo here. Shouldn't $b$ be greater than the maximum observation for the likelihood to be $0$? Similarly, $a$ should be less than the minimum? If $b$ is less than the maximum and $a$ is greater than the minimum, they might still be in the range.
    – ARAT
    Commented Oct 9, 2019 at 9:34
2

Hint: Look at the endpoints of your interval for a maximum. For a uniform distribution, the density is nonzero only for $a<x<b$. Can you take it from here?


Also look here: maximum estimator method more known as MLE of a uniform distribution

The only difference from the linked question is that you are asked to find two MLEs, one for each endpoint of the interval.

7
  • Hmm, an estimate is supposed to be a function of my data, $(X_1, \ldots, X_n)$, so I could estimate $\hat{a} = \min_{1\le i \le n} X_i$ and $\hat{b}=\max_{1\le i \le n} X_i$; is this correct? Commented Jun 4, 2013 at 17:07
  • Ah, just looked at the link and it seems to be the same thing, although I'm not familiar with the notion of 'order statistics'. Commented Jun 4, 2013 at 17:07
  • The $k$th order statistic (en.wikipedia.org/wiki/Order_statistic) is just the $k$th-smallest value of the $X_i$'s (from your sample). So for instance $X_{(1)}$, the first order statistic, is the smallest value of the $X_i$'s. Follow the same approach as in the link that I provided: first write down the likelihood function and then differentiate it with respect to $a$. See if the function is decreasing or increasing in $a$. Then decide whether you hence need the maximum or the minimum of the $X_i$'s (the smallest or largest order statistic). Then do the same for $b$.
    – dreamer
    Commented Jun 4, 2013 at 17:13
  • Your answer is not yet correct, but you are getting close. If you follow the steps correctly it should be easy to figure out the correct answer. If you need an explicit solution, let me know.
    – dreamer
    Commented Jun 4, 2013 at 17:15
  • I'm confused now; is my likelihood function wrong? I don't understand the thing with order statistics, or where the bounds on the likelihood function go. I looked here: ocw.mit.edu/courses/mathematics/… and it seems they do take the maximum as their estimator (p. 14), but their likelihood function includes conditions ($L=0 \mbox{ if } \theta \le \max(...)$ and $L=\theta ^{-n} \mbox{ if } \theta \ge \max(...)$), which I don't know where they came from... Commented Jun 5, 2013 at 8:28

