Consider the simple linear regression model $$y=\beta_0+\beta_1x+e.$$ We observe a sample of $n$ pairs of observations $(x_i,y_i)$, $i=1,2,\cdots,n$, so we can write $$y_i=\beta_0+\beta_1x_i+e_i,$$ where the errors are i.i.d. with $$e_i \sim N(0,\sigma^2).$$ Using the least squares method, we obtain the estimators of $\beta_0$ and $\beta_1$ and denote them by $b_0$ and $b_1$,
i.e. $$ b_1=\frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^n(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^n(x_i-\bar{x})^2}, \qquad b_0=\bar{y}-b_1\bar{x},$$ with $\bar{x}=\frac{1}{n}\sum_{i=1}^nx_i$ and $\bar{y}=\frac{1}{n}\sum_{i=1}^ny_i$.
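For concreteness, here is a minimal sketch of that computation in Python (the data values are made up purely for illustration):

```python
import numpy as np

# Hypothetical sample (x_i, y_i), i = 1, ..., n -- illustrative values only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# Least squares estimates: b1 = S_xy / S_xx, b0 = y_bar - b1 * x_bar
S_xy = np.sum((x - x_bar) * (y - y_bar))
S_xx = np.sum((x - x_bar) ** 2)
b1 = S_xy / S_xx
b0 = y_bar - b1 * x_bar
```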
The residual sum of squares of the model is defined as $$RSS=\sum_{i=1}^n \hat{e}_i^2=\sum_{i=1}^n\bigl(y_i-(b_0+b_1x_i)\bigr)^2.$$
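Continuing the snippet above (it reuses `x`, `y`, `b0`, `b1`), the RSS is just the sum of the squared residuals:

```python
# Residual sum of squares from the fitted line
y_hat = b0 + b1 * x        # fitted values b0 + b1 * x_i
residuals = y - y_hat      # e_hat_i
RSS = np.sum(residuals ** 2)
```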
In this case, why are the residual sum of squares and $b_1$ (or $b_0$) independently distributed?
I ran into this question while reading about confidence interval estimation for the simple linear regression model when $\sigma^2$ is unknown (so the $t$-statistic is used).
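To convince myself numerically before looking for a proof, I tried a quick Monte Carlo sketch (all parameter values here are made up): simulate many samples from a fixed design and check that the sample correlation between $b_1$ and $RSS$ is near zero. Of course, near-zero correlation alone does not prove independence; it is only a sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: fixed design, true beta0 = 1, beta1 = 2, sigma = 1
n, reps = 20, 10_000
x = np.linspace(0.0, 1.0, n)
x_bar = x.mean()
S_xx = np.sum((x - x_bar) ** 2)

b1_samples = np.empty(reps)
rss_samples = np.empty(reps)
for r in range(reps):
    e = rng.normal(0.0, 1.0, n)          # i.i.d. N(0, sigma^2) errors
    y = 1.0 + 2.0 * x + e
    b1 = np.sum((x - x_bar) * (y - y.mean())) / S_xx
    b0 = y.mean() - b1 * x_bar
    resid = y - (b0 + b1 * x)
    b1_samples[r] = b1
    rss_samples[r] = np.sum(resid ** 2)

# If b1 and RSS are independent, this should be close to zero
print(np.corrcoef(b1_samples, rss_samples)[0, 1])
```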