In "Interpretable Machine Learning: A Guide for Making Black Box Models Explainable", I found the following for Friedman's H-statistic: $$PD_{jk}(x_j, x_k) = PD_j(x_j) + PD_k(x_k),$$ where $PD_{jk}(x_j, x_k)$ is the two-way partial dependence function of both features and $PD_j(x_j)$ and $PD_k(x_k)$ are the partial dependence functions of the single features. Later, the H-statistic is calculated as follows: $$H_{jk}^2 = \frac{\sum_i \left[PD_{jk}(x_j^{(i)}, x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)})\right]^2}{\sum_i PD^2_{jk}(x_j^{(i)}, x_k^{(i)})}$$
Wouldn't this expression always be zero when combined with the first equation?
Looking at the numerator, my thought process is the following: $$PD_{jk}(x_j^{(i)}, x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)}) = PD_j (x_j^{(i)}) + PD_k (x_k^{(i)}) - PD_j(x_j^{(i)}) - PD_k(x_k^{(i)}) = 0.$$
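To sanity-check my reasoning numerically, here is a minimal sketch I put together (the toy models, the brute-force PD estimator, and the mean-centering of the PD functions are my own assumptions, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two features: x_j (column 0) and x_k (column 1)

def pd_values(f, X, cols):
    """Brute-force partial dependence of f on `cols`, evaluated at each
    observation's own feature values (as in the sums over i above)."""
    n = len(X)
    out = np.empty(n)
    for i in range(n):
        Xmod = X.copy()
        Xmod[:, cols] = X[i, cols]  # fix the chosen features, average over the rest
        out[i] = f(Xmod).mean()
    return out

def h_squared(f, X, j, k):
    pd_jk = pd_values(f, X, [j, k])
    pd_j = pd_values(f, X, [j])
    pd_k = pd_values(f, X, [k])
    # center each PD function at zero (my assumption for this check)
    pd_jk -= pd_jk.mean()
    pd_j -= pd_j.mean()
    pd_k -= pd_k.mean()
    return ((pd_jk - pd_j - pd_k) ** 2).sum() / (pd_jk ** 2).sum()

additive = lambda X: X[:, 0] + X[:, 1]     # no interaction between the features
interacting = lambda X: X[:, 0] * X[:, 1]  # pure interaction

print(h_squared(additive, X, 0, 1))     # effectively zero
print(h_squared(interacting, X, 0, 1))  # clearly nonzero
```

For the additive toy model the numerator does vanish, exactly as in my derivation, but for the interacting one it does not, which makes me suspect the first equation is not meant to hold in general.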
The chapter can be found here: https://christophm.github.io/interpretable-ml-book/interaction.html