This question is related to Corollary 3 of the paper: Dimensionality Reduction for Supervised Learning with Reproducing Kernel Hilbert Spaces by Kenji Fukumizu, et al.
Basically, they first define the covariance operator $\Sigma$ on an infinite-dimensional separable Hilbert space $\mathcal{H}$; it is a linear, bounded, nonnegative, self-adjoint, trace-class operator.
At the beginning of Corollary 3, they said "Let $\Sigma^{-1}$ be the right inverse of $\Sigma$ on (Ker $\Sigma)^\bot \subset \mathcal{H}$".
My understanding is that $\Sigma^{-1}:(\text{Ker }\Sigma)^\bot\to (\text{Ker }\Sigma)^\bot$ is defined to be a linear operator such that $\Sigma\Sigma^{-1}f = f$ for any $f\in (\text{Ker }\Sigma)^\bot$.
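To make my understanding concrete, here is the spectral construction I have in mind (my own notation, not the paper's). Since $\Sigma$ is trace-class, hence compact, and self-adjoint and nonnegative, we can write
$$\Sigma = \sum_{i} \lambda_i \langle \cdot, \varphi_i\rangle\, \varphi_i, \qquad \lambda_i > 0,$$
where $\{\varphi_i\}$ is an orthonormal basis of $(\text{Ker }\Sigma)^\bot$. The natural candidate for the right inverse is then
$$\Sigma^{-1} f = \sum_{i} \lambda_i^{-1} \langle f, \varphi_i\rangle\, \varphi_i,$$
but this series converges in $\mathcal{H}$ only when $\sum_i \lambda_i^{-2} \langle f, \varphi_i\rangle^2 < \infty$, i.e. only for $f \in \text{Range }\Sigma$.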
How is this right inverse defined explicitly? Also, since $(\text{Ker }\Sigma)^\bot = \overline{\text{Range }\Sigma}$, if $f\in (\text{Ker }\Sigma)^\bot\setminus \text{Range }\Sigma$, then $f$ is not in the range of $\Sigma$, so it seems impossible for $\Sigma\Sigma^{-1}f = f$ to hold.
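As a finite-dimensional sanity check of the issue (my own illustration, not from the paper), the Moore–Penrose pseudoinverse of a rank-deficient PSD matrix is a right inverse exactly on the range of the matrix:

```python
import numpy as np

# A rank-deficient, symmetric, positive semi-definite "covariance" matrix.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])  # Ker A = span{e3}, Range A = span{e1, e2}

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudoinverse

# For f in Range A (which equals (Ker A)^perp in finite dimensions,
# since the range is automatically closed), A @ A_pinv acts as the identity.
f = np.array([3.0, -1.0, 0.0])
assert np.allclose(A @ A_pinv @ f, f)

# For g with a component in Ker A, A @ A_pinv @ g only recovers
# the orthogonal projection of g onto Range A.
g = np.array([3.0, -1.0, 5.0])
print(A @ A_pinv @ g)
```

Of course, in finite dimensions $\text{Range }\Sigma$ is closed, so $(\text{Ker }\Sigma)^\bot = \text{Range }\Sigma$ and the issue disappears; the trouble is precisely that in infinite dimensions $\text{Range }\Sigma$ can be a proper dense subspace of $(\text{Ker }\Sigma)^\bot$.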
Any help is appreciated!