How can I show that the variance of local polynomial regression is increasing with the degree of the polynomial (Exercise 6.3 in Elements of Statistical Learning, second edition)?
This question has been asked before, but the existing answer merely states that the result follows easily.
More precisely, we consider the model $y_{i}=f(x_{i})+\epsilon_{i}$, where the $\epsilon_{i}$ are independent errors with standard deviation $\sigma$.
The estimator is given by
$$ \hat{f}(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right), $$ where $(\alpha,\beta_{1},\dots,\beta_{d})$ solves the weighted least squares problem $$ \min_{\alpha,\beta_{1},\dots,\beta_{d}}\left(y-X\beta\right)^{t}W\left(y-X\beta\right),\qquad X=\left(\begin{array}{ccccc} 1 & x_{1} & x_{1}^{2} & \dots & x_{1}^{d}\\ \vdots & & & & \vdots\\ 1 & x_{n} & x_{n}^{2} & \dots & x_{n}^{d} \end{array}\right),\quad\beta=\left(\begin{array}{c} \alpha\\ \beta_{1}\\ \vdots\\ \beta_{d} \end{array}\right), $$ with $W=\operatorname{diag}\left(K(x_{0},x_{i})\right)_{i=1,\dots,n}$, where $K$ is the regression kernel. The solution to the weighted least squares problem can be written as $$ \left(\begin{array}{cccc} \alpha & \beta_{1} & \dots & \beta_{d}\end{array}\right)^{t}=\left(X^{t}WX\right)^{-1}X^{t}Wy. $$ Thus, writing $l(x_{0})=\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W$, we obtain $$ \hat{f}(x_{0})=l(x_{0})\,y, $$ which implies $$ \operatorname{Var}\hat{f}(x_{0})=\sigma^{2}\left\Vert l(x_{0})\right\Vert ^{2}=\sigma^{2}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)\left(X^{t}WX\right)^{-1}X^{t}W^{2}X\left(X^{t}WX\right)^{-1}\left(\begin{array}{ccccc} 1 & x_{0} & x_{0}^{2} & \dots & x_{0}^{d}\end{array}\right)^{t}. $$ My approach: induction on $d$ using the block-matrix inversion formula for $X^{t}WX$, but I did not succeed.
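As a sanity check (not a proof), the claimed monotonicity can be verified numerically by computing $\sigma^{2}\left\Vert l(x_{0})\right\Vert^{2}$ for increasing degrees $d$. The data, the Gaussian kernel, the bandwidth $h$, and the point $x_{0}$ below are arbitrary choices of mine, not part of the exercise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 50))  # design points (arbitrary)
x0 = 0.1                             # evaluation point (arbitrary)
sigma2 = 1.0                         # error variance sigma^2

# Gaussian kernel weights W = diag(K(x0, x_i)); bandwidth h is an arbitrary choice
h = 0.5
w = np.exp(-((x - x0) ** 2) / (2 * h ** 2))
W = np.diag(w)

def var_fhat(d):
    """Var f_hat(x0) = sigma^2 * ||l(x0)||^2 for a degree-d local polynomial fit."""
    X = np.vander(x, d + 1, increasing=True)        # columns 1, x, ..., x^d
    x0_row = np.array([x0 ** j for j in range(d + 1)])
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    l = x0_row @ XtWX_inv @ X.T @ W                 # the weight vector l(x0)
    return sigma2 * (l @ l)

variances = [var_fhat(d) for d in range(5)]
print(variances)
```

For this configuration the printed sequence is non-decreasing in $d$, consistent with the statement of the exercise, though of course a single numerical example establishes nothing in general.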
The paper *Multivariate Locally Weighted Least Squares Regression* by D. Ruppert and M. P. Wand derives an asymptotic expression for the variance as $n\rightarrow\infty$ (Theorem 4.1), but it is not clear from that expression that the variance is increasing in the degree.