
Consider a random sample $X_1, X_2, \dots, X_n$ from the shifted exponential PDF

$$f(x; \lambda, \theta) = \begin{cases}\lambda e^{-\lambda(x-\theta)}, & x \geq \theta\\ 0, &\text{otherwise}\end{cases}$$

Taking $\theta = 0$ gives the pdf of the exponential distribution considered previously (with positive density to the right of zero).

a. Obtain the maximum likelihood estimators of $\theta$ and $\lambda$.

I followed the basic rules for the MLE and came up with:

$$\lambda = \frac{n}{\sum_{i=1}^n(x_i - \theta)}$$

Should I take $\theta$ out and write it as $-n\theta$ and find $\theta$ in terms of $\lambda$?

  • $\begingroup$ "Should I take $θ$ out and write it as $-nθ$ and find $θ$ in terms of $λ$?" Seems like running in circles, no? At the moment you have one relation between θ and λ, this cannot suffice to determine them both. $\endgroup$
    – Did
    Commented Jan 28, 2016 at 23:50
  • It's been a while. Following those "basic rules" is not a universal solution, and even when they work one should try to understand the situation rather than just turning the crank. I've posted a solution showing $\widehat{\theta} = \min\{x_1,\ldots,x_n\}.$
    Commented Oct 17, 2017 at 17:45

2 Answers


The density of a single observation $x_i$ is
$$f(x \mid \lambda, \theta) = \lambda e^{-\lambda(x-\theta)} \mathbb{1}(x \ge \theta).$$
The joint density of the entire sample $\boldsymbol x$ is therefore
$$\begin{align*} f(\boldsymbol x \mid \lambda, \theta) &= \prod_{i=1}^n f(x_i \mid \lambda, \theta) \\ &= \lambda^n \exp\left(-\sum_{i=1}^n \lambda(x_i - \theta)\right) \mathbb{1}(x_{(1)} \ge \theta) \\ &= \lambda^n \exp\left(-\lambda n (\bar x - \theta)\right) \mathbb{1}(x_{(1)} \ge \theta), \end{align*}$$
where $\bar x$ is the sample mean and $x_{(1)} = \min_i x_i$ is the minimum order statistic. Hence the joint log-likelihood for $\lambda, \theta$ is proportional to
$$\ell(\lambda, \theta \mid \boldsymbol x) \propto \log \lambda - \lambda(\bar x - \theta) + \log \mathbb{1}(x_{(1)} \ge \theta).$$

We maximize $\ell$ over the pair $(\hat \lambda, \hat \theta)$. Because $\lambda > 0$, $\ell$ is strictly increasing in $\theta$ on $\theta \le x_{(1)}$ and drops to $-\infty$ as soon as $\theta > x_{(1)}$; hence $\ell$ is maximized with respect to $\theta$ by taking $\theta$ as large as possible without exceeding the minimum order statistic, i.e., $\hat \theta = x_{(1)}$.

For fixed $\theta$, $\ell$ is differentiable in $\lambda > 0$, so we compute the partial derivative
$$\frac{\partial \ell}{\partial \lambda} = \frac{1}{\lambda} - (\bar x - \theta),$$
whose only critical point is
$$\lambda = \frac{1}{\bar x - \theta},$$
and one can verify that this choice is a global maximum over $\lambda > 0$. Therefore, the joint maximum likelihood estimator is
$$(\hat \lambda, \hat \theta) = \left((\bar x - x_{(1)})^{-1},\; x_{(1)}\right).$$

Note that when both $\lambda$ and $\theta$ are unknown parameters, the MLE cannot contain any expressions involving $\lambda$ or $\theta$: an estimator is always a function of the sample and/or known parameters.
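As a quick numerical sanity check of this closed form, here is a minimal simulation sketch, assuming NumPy is available; the true parameter values and sample size below are arbitrary choices, not anything from the problem:

```python
# Minimal sketch: recover (lambda_hat, theta_hat) from synthetic data.
# Assumes NumPy; the true parameter values below are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
lam_true, theta_true, n = 2.0, 5.0, 100_000

# Shifted exponential draws: theta + Exp(lam).
x = theta_true + rng.exponential(scale=1.0 / lam_true, size=n)

theta_hat = x.min()                     # MLE of theta: minimum order statistic
lam_hat = 1.0 / (x.mean() - theta_hat)  # MLE of lambda: 1 / (xbar - x_(1))

print(theta_hat, lam_hat)  # should be close to 5.0 and 2.0 for large n
```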

  • You could also have proven $\hat{\theta} = \min_i x_i$ first, and used the MLE machinery only for $\lambda$. By the way, is $\hat{\theta} = \min_i x_i$ a biased estimator or not?
    – reuns
    Commented Jan 29, 2016 at 0:03
  • @user1952009 It is always a good idea to proceed systematically and generally for pedagogical purposes, since it is possible to have a multi-parameter distribution for which maximizing the likelihood requires simultaneous consideration of the parameters. As for the bias: that is left as an exercise for the interested reader, but it should be intuitively obvious that $\hat \theta$ is biased, since for any given true $\theta$ it is never possible to observe $x_i < \theta$, and thus impossible to obtain an estimate smaller than $\theta$. (A simulation sketched just after these comments bears this out.)
    – heropup
    Commented Jan 29, 2016 at 0:28
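Regarding the bias question above: since $x_{(1)} - \theta \sim \text{Exp}(n\lambda)$, in fact $\mathbb{E}[\hat\theta] = \theta + \frac{1}{n\lambda} > \theta$. A minimal simulation sketch (again assuming NumPy; the parameter values are arbitrary) agrees:

```python
# Sketch: empirical check that theta_hat = min(x) is biased upward.
# Theory: x_(1) - theta ~ Exp(n*lambda), so E[theta_hat] = theta + 1/(n*lambda).
import numpy as np

rng = np.random.default_rng(1)
lam, theta, n, reps = 2.0, 5.0, 20, 200_000

samples = theta + rng.exponential(scale=1.0 / lam, size=(reps, n))
theta_hats = samples.min(axis=1)  # one MLE of theta per replication

print(theta_hats.mean())      # empirically ~ 5.025
print(theta + 1 / (n * lam))  # 5.025 from theory
```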

$$L(\lambda,\theta) = \begin{cases} 0, & \theta > \min\{x_1,\ldots,x_n\}, \\[10pt] \displaystyle \lambda^n \exp\left(-\lambda\sum_{i=1}^n (x_i-\theta) \right) = \lambda^n\exp\left( -\lambda n (\overline x - \theta) \right), & \text{otherwise,} \end{cases}$$ where $\overline x = \dfrac 1 n \sum_{i=1}^n x_i.$

As a function of $\theta,$ the likelihood increases as $\theta$ increases, until $\theta$ reaches $\min\{x_1,\ldots,x_n\};$ beyond that point it is $0.$

Therefore the MLE for $\theta$ is $\min\{x_1,\ldots,x_n\}.$

Then we have $\displaystyle L(\lambda,\min) = \lambda^n \exp\left( -\lambda \sum_{i=1}^n (x_i-\min) \right),$ and so $$ \ell = \log L(\lambda,\min) = n\log\lambda - {}\lambda\sum_{i=1}^n(x_i-\min). $$ $$ \frac{d\ell}{d\lambda} = \frac n \lambda - \sum_{i=1}^n (x_i-\min). $$ This is $0$ when $\lambda = \dfrac n {\sum_{i=1}^n (x_i-\min)}.$

That alone doesn't prove that there is a global maximum at that point, but the nature of the function makes it clear that a global maximum occurs somewhere ($\ell \to -\infty$ both as $\lambda \to 0^+$ and as $\lambda \to \infty$), the derivative has to be $0$ where it occurs, and there is only one point where the derivative is $0.$ So that's it.
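For completeness, a concavity check makes the global-maximum claim immediate:
$$\frac{d^2\ell}{d\lambda^2} = -\frac{n}{\lambda^2} < 0 \quad \text{for all } \lambda > 0,$$
so $\ell$ is strictly concave in $\lambda$ and its unique critical point is the global maximizer.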

  • Why until $\theta$ gets as big as $\min\{x_1,\ldots,x_n\}$? I feel $\theta = \max\{x_1,\ldots,x_n\}$ would give the maximum of $L$.
    – Archer
    Commented Apr 3, 2020 at 19:25
  • @Archer: The function of $\theta$ is $\lambda^n\exp\left( -\lambda n (\,\overline x - \theta) \right).$ That gets bigger as $\theta$ gets bigger, except that as soon as $\theta$ exceeds $\min\{x_1,\ldots,x_n\}$ the function drops to $0,$ because none of the observations can be less than $\theta.$
    Commented Apr 4, 2020 at 17:03
