59

I'm taking a machine learning course. The professor has a model for linear regression, where $h_\theta$ is the hypothesis (the proposed model; linear regression, in this case), $J(\theta_1)$ is the cost function, $m$ is the number of elements in the training set, and $x^{(i)}$ and $y^{(i)}$ are the variables of the $i$-th training example:

$$h_\theta(x) = \theta_1 x$$

$$J(\theta_1) = \frac{1}{2m} \sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})^2$$

What I don't understand is why he is dividing the sum by $2m$.
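For concreteness, the cost function translates directly into code; here is a minimal sketch (Python with NumPy; the variable names are mine, not the course's):

```python
import numpy as np

# J(theta1) = 1/(2m) * sum_i (theta1 * x_i - y_i)^2, as defined above
def cost(theta1, x, y):
    m = len(y)
    residuals = theta1 * x - y  # h_theta(x_i) - y_i for each training example
    return np.sum(residuals ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(cost(2.0, x, y))  # 0.0, since theta1 = 2 fits y = 2x exactly
```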


5 Answers

60

The $\frac{1}{m}$ is there to "average" the squared error over the number of training examples, so that the number of examples doesn't affect the scale of the function (see John's answer).

So now the question is why there is an extra $\frac{1}{2}$. In short, it doesn't matter. The solution that minimizes $J$ as you have written it will also minimize $2J=\frac{1}{m} \sum_i (h(x_i)-y_i)^2$. The latter function, $2J$, may seem more "natural," but the factor of $2$ does not matter when optimizing.

The only reason some authors like to include it is that when you take the derivative with respect to $\theta_1$, the $2$ goes away.
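For example, with the hypothesis from the question, the cancellation is explicit when differentiating:

$$\frac{\partial J}{\partial \theta_1} = \frac{1}{2m} \sum_{i=1}^{m} 2\left(h_\theta(x^{(i)}) - y^{(i)}\right) x^{(i)} = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x^{(i)}$$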

  • Then why include it at all? – Daniel (Aug 1, 2014 at 17:59)
  • @DantheMan See the last sentence of my answer. After taking the derivative, the $2$ won't appear anymore, and since most of the computation is with the derivative, it saves some clutter. – angryavian (Aug 1, 2014 at 18:01)
  • Ah, I understand now. I understood something else. Thank you. – Daniel (Aug 1, 2014 at 18:04)
  • I can't prove this, but I believe you; it makes the derivative calculation easier, which we do in gradient descent, for example. – mskw (Sep 29, 2017 at 14:48)
15

I assume that the $\frac{1}{m}$ component is obvious, so I will focus on the $\frac{1}{2}$ part. I personally doubt that so many authors would decide to include this confusing term just to achieve slightly simpler gradient formulas. Note that there are ways of finding the solution to the linear regression equations that don't involve gradients at all; I will provide another explanation.
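For example, one gradient-free route is the closed-form least-squares solution for a design matrix $X$ (assuming $X^\top X$ is invertible), on which a constant factor in the cost has no effect:

$$\hat{\theta} = (X^\top X)^{-1} X^\top y$$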

When we evaluate machine learning models, we assume that our observations are not fully accurate but rather contain some kind of error. For example, imagine measuring a length with a low-quality ruler. One of the simplest assumptions is that we introduce some Gaussian error:

$$ \epsilon \sim \mathcal{N}(0, 1) $$

Those parameters are usually safe because we perform some kind of data normalization anyway. We can now state that our prediction $\hat{y}$ matches the target value $y$ up to this measurement error:

$$ \hat{y} + \epsilon = y $$

We can treat $\hat{y} + \epsilon$ as a new random variable $\widetilde{y} \sim \mathcal{N}(\hat{y}, 1)$; we have just added a constant $\hat{y}$ to our zero-centered random variable $\epsilon$. This random variable $\widetilde{y}$ is our probabilistic estimate of the observation. Instead of stating that for a given input $x$ we will observe the output $\hat{y}$ (which would not be true due to the errors), we state that we will most probably observe something around $\hat{y}$. We can compute the likelihood of actually observing $y$ (or any other value) using the Gaussian PDF:

$$ p(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) $$

In our case $\mu = \hat{y}$ and $\sigma = 1$:

$$ p(y) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{(y - \hat{y})^2}{2}\right) $$

Note that this is the function we would actually like to maximize: the probability of observing the true value $y$ given our model. Since our main goal is maximization, we can apply a monotone function such as the logarithm and ignore the constants.

$$ \log p(y) = -\frac{(y - \hat{y})^2}{2} + \text{const} $$

Dropping the constant and the minus sign (so that maximizing the log-likelihood becomes minimizing a loss), we obtain the squared error term, $\frac{1}{2}$ included, for a single example. Averaging over all of the examples gives the MSE formula.

$$ MSE(y, \hat{y}) = \frac{1}{2m} \sum_{i=1}^{m} \left(y^{(i)} - \hat{y}^{(i)}\right)^2 $$
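To make the correspondence concrete, here is a small numerical check (a sketch in Python with NumPy; the data is illustrative): the average Gaussian negative log-likelihood with $\sigma = 1$ equals the $\frac{1}{2m}$-scaled squared error plus a constant that does not depend on the predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=5)      # observed targets
y_hat = rng.normal(size=5)  # model predictions
m = len(y)

# Negative log-likelihood of y under N(y_hat, 1), averaged over examples
nll = np.mean(0.5 * (y - y_hat) ** 2 + 0.5 * np.log(2 * np.pi))

# The 1/(2m)-scaled squared error from the MSE formula above
half_mse = np.sum((y - y_hat) ** 2) / (2 * m)

print(np.isclose(nll, half_mse + 0.5 * np.log(2 * np.pi)))  # True
```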

Note that we can similarly derive the formula for the logistic regression loss, i.e. cross-entropy or log-loss.

  • A bit far-fetched, but clever, and it makes sense. (Aug 31, 2019 at 14:17)
12

Dividing by $2m$ ensures that the cost function doesn't depend on the number of elements in the training set. This allows a better comparison across models.
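A quick numerical sketch (Python with NumPy; the error values are illustrative) shows the effect: the same per-example errors on a ten-times-larger training set leave the averaged cost unchanged, while the raw sum of squares grows tenfold.

```python
import numpy as np

rng = np.random.default_rng(1)
errors_small = rng.normal(size=100)       # h(x_i) - y_i on 100 examples
errors_large = np.tile(errors_small, 10)  # same error pattern on 1000 examples

for e in (errors_small, errors_large):
    m = len(e)
    print(np.sum(e ** 2) / (2 * m), np.sum(e ** 2))
# First column (averaged cost) is identical; second (raw sum) is 10x larger.
```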

  • How does that make it non-dependent? It looks like the $2m$ is just dividing the average up even more, like dividing a pie by 6 and then by 2×6 = 12; practically, the average is halved. – mskw (Sep 29, 2017 at 14:47)
  • @mskw The accepted answer explains better where the $2$ comes from. Actually, the entire answer is better than mine was! – John (Sep 29, 2017 at 15:32)
  • "This allows a better comparison across models" - thanks, this is the answer I was looking for. – Amnon (Mar 21, 2018 at 17:46)
  • @Amnon Please explain how "this allows a better comparison across models"; I don't seem to get it. (Jul 25, 2019 at 14:54)
  • @AnkitAgrawal For some models you may have $100$ elements in the training set, for others $1000$. Without dividing by something proportional to $m$, the cost function would be ten times larger for the larger training set; dividing removes the effect of the size of the training set. – John (Jul 25, 2019 at 15:03)
3

I wondered about the exact same thing when taking this course, and ended up researching this a bit. I'll give a short answer here, but you can read a more detailed overview in a blog post I wrote about it.

I believe that at least part of the reason for those scaling coefficients is that L² regularization probably entered the field of deep learning through the introduction of the related, but not identical, concept of weight decay.

The 0.5 factor is then there to get a nice λ-only coefficient for the weight decay in the gradient (see the worked gradient after this list), and the scaling by m... well, there are at least 5 different motivations that I have found or came up with:

  1. A side-effect of batch gradient descent: when a single iteration of gradient descent is instead formalized over the entire training set (the algorithm sometimes called batch gradient descent), the scaling factor of 1/m, introduced to make the cost function comparable across datasets of different sizes, gets automatically applied to the weight decay term as well.
  2. Rescale to the weight of a single example: See grez's interesting intuition.
  3. Training set representativeness: It makes sense to scale down regularization as the size of the training set grows, as statistically, its representativeness of the overall distribution also grows. Basically, the more data we have, the less regularization is needed.
  4. Making λ comparable: By hopefully mitigating the need to change λ when m changes, this scaling makes λ itself comparable across datasets of different sizes. This makes λ a more representative estimator of the actual degree of regularization required by a specific model on a specific learning problem.
  5. Empirical value: The great notebook by grez demonstrates that this improves performance in practice.
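As noted above, here is the worked gradient under one common convention (a sketch mirroring the course's formulation, with a linear hypothesis $h_\theta$ and the $\frac{1}{2m}$ scaling applied to the regularization term as well):

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \frac{\lambda}{2m} \sum_{j} \theta_j^2, \qquad \frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)} + \frac{\lambda}{m}\,\theta_j$$

After differentiation the $\frac{1}{2}$ is gone from both terms, leaving the clean $\frac{\lambda}{m}\,\theta_j$ weight-decay coefficient.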
1

This is how they explain it at Coursera (https://www.coursera.org/learn/machine-learning/supplement/nhzyF/cost-function):

The mean is halved (1/2) as a convenience for the computation of the gradient descent, as the derivative term of the square function will cancel out the 1/2 term.

