
All Questions

2 votes • 0 answers • 90 views

Decomposing the prediction of a sum of Gaussian Processes into predictions from each Gaussian Process

Suppose the functions $f_1\sim\mathcal{GP}(m_1,K_1)$ and $f_2\sim\mathcal{GP}(m_2,K_2)$ are drawn from independent Gaussian Processes, and let $$f=f_1+f_2.$$ Then $$f\sim\mathcal{GP}(m,K)$$ where $m=...
FizzleDizzle
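A brief sketch of the standard result this setup leads to, assuming independence as stated in the excerpt: the sum of independent Gaussian processes is again a Gaussian process with $$m = m_1 + m_2, \qquad K = K_1 + K_2,$$ and for noisy observations $y = f(X) + \varepsilon$ with $\varepsilon\sim\mathcal{N}(0,\sigma^2 I)$, the posterior mean decomposes into per-component terms, e.g. $$\mathbb{E}[f_1(x_*)\mid y] = m_1(x_*) + K_1(x_*,X)\big(K_1(X,X)+K_2(X,X)+\sigma^2 I\big)^{-1}\big(y - m(X)\big).$$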
7 votes • 3 answers • 2k views

If X = Y + Z, is it ever useful to regress X on Y?

If we have X and Y that are mathematically dependent, X = Y + Z, is it 'forbidden' to use Y as a predictor of X in a linear regression? I'm trying to find a concise explanation for why it is, or isn't. ...
amc____ • 85
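A short worked calculation (a sketch, not taken from the thread) showing what the slope estimates here: with $X = Y + Z$, the population OLS slope of $X$ on $Y$ is $$\beta = \frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(Y)} = \frac{\operatorname{Cov}(Y+Z,Y)}{\operatorname{Var}(Y)} = 1 + \frac{\operatorname{Cov}(Y,Z)}{\operatorname{Var}(Y)},$$ so the fitted slope is mechanically $1$ plus a term reflecting how $Z$ covaries with $Y$.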
1 vote • 1 answer • 60 views

Is the sum of 3 bits a linearly separable task?

In other words, can a linear classifier learn to correctly assign a class (labels 0 to 3) to an input of 3 bits? Intuitively this cannot work, since the half-adder circuit contains an XOR block, which ...
jaaq • 111
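One way to sketch the multiclass case (my own construction, not from the thread): with input $x=(b_1,b_2,b_3)$ and class $k=b_1+b_2+b_3$, the affine scores $$s_k(x) = k\,(b_1+b_2+b_3) - \tfrac{k^2}{2}, \qquad k=0,1,2,3,$$ satisfy $\arg\max_k s_k(x) = b_1+b_2+b_3$, because for a fixed bit-sum $s$ the concave expression $ks - k^2/2$ is maximized at $k=s$; so the four classes are separable by linear score functions, even though a single linear unit cannot output the XOR (parity) bit.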
3 votes • 1 answer • 57 views

Is there a way to prove $\mathbf{\hat{Y}}^T\mathbf{e}=\mathbf{0}$ without resorting to summations?

I would like to show that $\mathbf{\hat{Y}}^T\mathbf{e}=\mathbf{0}$. I can solve this by saying that it is equivalent to showing $\sum e_i\hat{y}_i=0$. However, I'm wondering if there is a way to ...
Ron Snow • 2,103
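A sketch of the matrix argument, assuming the usual hat matrix $H=\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T$: since $\mathbf{\hat{Y}} = H\mathbf{Y}$ and $\mathbf{e} = (\mathbf{I}-H)\mathbf{Y}$, $$\mathbf{\hat{Y}}^T\mathbf{e} = \mathbf{Y}^T H^T(\mathbf{I}-H)\mathbf{Y} = \mathbf{Y}^T(H - H^2)\mathbf{Y} = 0,$$ because $H$ is symmetric and idempotent.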
1 vote • 1 answer • 75 views

Residuals in the simple regression model

Do the residuals in the simple regression model have to sum to 0?
Sarah kenwich
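A sketch of the standard answer, assuming the model includes an intercept: the normal equation from differentiating the residual sum of squares with respect to $\beta_0$ gives $$\sum_i\big(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i\big) = \sum_i e_i = 0,$$ so the residuals sum to zero whenever an intercept is fitted; without an intercept this need not hold.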
1 vote • 1 answer • 5k views

Linear regression $y_i=\beta_0 + \beta_1x_i + \epsilon_i$ covariance between $\bar{y}$ and $\hat{\beta}_1$

I am currently reading through slides from Georgia Tech on linear regression and came across a section that has confused me. It states that for $$ y_i=\beta_0+\beta_1x_i+\epsilon_i $$ where $\epsilon_i \...
strwars • 367
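A sketch of the standard calculation, assuming fixed $x_i$ and uncorrelated errors with variance $\sigma^2$: writing $\hat{\beta}_1=\sum_i c_i y_i$ with $c_i=(x_i-\bar{x})/S_{xx}$ and $\bar{y}=\frac{1}{n}\sum_i y_i$, $$\operatorname{Cov}(\bar{y},\hat{\beta}_1) = \frac{\sigma^2}{n}\sum_i c_i = \frac{\sigma^2}{n}\cdot\frac{\sum_i(x_i-\bar{x})}{S_{xx}} = 0.$$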
1 vote • 1 answer • 70 views

Value of $\sum_{j=1}^{n} (y_{j} - \bar{y})$ and proving properties of hat values

The $i$-th fitted value $\hat{Z}_i$ is written as a linear combination of the response values, $\hat{Z}_i=\sum_{j=1}^{n}h_{ij}Z_j$, where $h_{ij}=\frac{1}{n}+\frac{(y_i-\bar{y})(y_j-\bar{y})}{S_{yy}}$ and $S_{yy}=\...
strwars • 367
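A sketch of the first part and how it enters the hat values, using the definitions in the excerpt: $$\sum_{j=1}^{n}(y_j-\bar{y}) = \sum_{j=1}^{n}y_j - n\bar{y} = 0, \qquad \sum_{j=1}^{n}h_{ij} = 1 + \frac{y_i-\bar{y}}{S_{yy}}\sum_{j=1}^{n}(y_j-\bar{y}) = 1.$$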
2 votes • 1 answer • 3k views

Notation for leads and lags in difference-in-differences

I was hoping someone could help clarify a notational discrepancy. For example, Lord Pischke uses the following sigma notation in two different lecture notes published on the web, yet refers to the ...
Thomas Bilach
0 votes • 1 answer • 133 views

Proving an identity involving $E(e_i^2)$ in simple OLS

Having expressed the simple OLS residual $e_i$ as a weighted sum of the noise terms: \begin{equation}e_{i}=\sum_{j}\left(\delta_{i j}-\frac{1}{n}-\left(x_{i}-\overline{x}\right) \frac{x_{j}-\overline{x}...
pdb • 76
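A sketch of where this identity typically leads, assuming i.i.d. errors with variance $\sigma^2$ and leverage $h_{ii} = \frac{1}{n} + \frac{(x_i-\bar{x})^2}{\sum_j (x_j-\bar{x})^2}$: $$E\left(e_i^2\right) = \sigma^2(1 - h_{ii}) = \sigma^2\left(1 - \frac{1}{n} - \frac{(x_i-\bar{x})^2}{\sum_j (x_j-\bar{x})^2}\right).$$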
6 votes • 2 answers • 596 views

Sum of predicted values to the power of 10 [closed]

When I raise predicted values from a linear model to the power of 10, their sum is always a lot bigger than the original. Is it even valid to sum them, and does anybody have a reference for how this ...
Rasmus Ø. Pedersen
0 votes • 1 answer • 8k views

Regression proof for decomposition of sums of squares [duplicate]

I got as far as distributing the summation across the left-hand side, so that I have: $$ \sum_i y_i^2 - \sum_i 2 y_i \bar{y} + \sum_i \bar{y}^2 $$ Not sure where to go from there.
Sohail • 1
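A possible next step from where the excerpt stops, using $\sum_i y_i = n\bar{y}$: $$\sum_i y_i^2 - 2\bar{y}\sum_i y_i + n\bar{y}^2 = \sum_i y_i^2 - 2n\bar{y}^2 + n\bar{y}^2 = \sum_i y_i^2 - n\bar{y}^2,$$ which is the usual shortcut form of $\sum_i(y_i-\bar{y})^2$.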