Sextus Empiricus
I am not quite sure about the first one. I think I've heard that we need a constant in the model for it to be true.

See the following example, where the residuals clearly correlate with the variable $x$ when we fit a model without a constant.

set.seed(1)
x = 0:20
y = 5 + x + rnorm(21)              # true model: intercept 5, slope 1
plot(x, y, ylim = c(0, 30))
lines(x, predict(lm(y ~ 0 + x)))   # fit forced through the origin

[plot: example of residuals correlating with $x$]
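The correlation is also easy to check numerically. A short sketch, using the same simulated data as above: the residuals of the no-intercept fit correlate clearly with $x$, while the fit with an intercept gives a correlation of essentially zero (up to floating-point error).

    set.seed(1)
    x = 0:20
    y = 5 + x + rnorm(21)
    u0 = resid(lm(y ~ 0 + x))   # residuals from the fit without a constant
    u1 = resid(lm(y ~ x))       # residuals from the fit with a constant
    cor(x, u0)                  # clearly nonzero (negative)
    cor(x, u1)                  # essentially zero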

And I think the third one does not matter for the question.

Consider the covariance (which is directly related to the correlation):

$$\operatorname{cov}(x,u) = E[xu] - E[x]E[u]$$

The fact that $E[xu] = 0$ does not guarantee zero correlation. In particular, the covariance is nonzero when $E[u] \neq 0$ (and $E[x] \neq 0$), which can happen when there is no intercept.

Orthogonal, as in $x \cdot u = 0$, does not mean uncorrelated, as in $(x-\bar{x}) \cdot (u - \bar{u}) = 0$.
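Both points can be verified on the no-intercept fit above: the normal equations force $\sum_i x_i u_i = 0$ exactly (the residuals are orthogonal to $x$ by construction), yet the residuals have a nonzero mean and a nonzero correlation with $x$. A sketch with the same simulated data:

    set.seed(1)
    x = 0:20
    y = 5 + x + rnorm(21)
    u = resid(lm(y ~ 0 + x))
    sum(x * u)   # ~0 by construction: x and u are orthogonal
    mean(u)      # nonzero: there is no intercept to absorb the mean
    cor(x, u)    # nonzero: orthogonal does not imply uncorrelated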
