I am self-studying Heinz H. Bauschke and Walaa M. Moursi's "An Introduction to Convexity, Optimization, and Algorithms". The very first proposition in the book frustrated me: even though I self-studied linear algebra from Gilbert Strang's course, I had never seen this algorithm before, and the book provides no further reference or any explanation of why it is true. The proposition is as follows:
Assume $A, B \in \mathbb{R}^{n \times n}$ are positive semidefinite. Suppose that there exists $\beta \geq 0$ such that $A \leq \beta B$. Then the smallest $\gamma \geq 0$ such that $A \leq \gamma B$ can be found as follows:
1- Find an orthonormal basis of eigenvectors $u_1, \dots, u_d$ corresponding to all nonzero eigenvalues of $B$. Assemble these as the columns of a matrix $U \in \mathbb{R}^{n \times d}$. If $d = 0$, then return $\gamma = 0$.
2- Obtain the Cholesky decomposition $U^T B U = LL^T$.
3- Return $\gamma = \lambda_{\max}(L^{-1}U^TAUL^{-T})$.
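To make sure I understand the steps, here is my attempt to translate them into NumPy (the function name `smallest_gamma` and the eigenvalue tolerance `tol` are my own choices, not from the book):

```python
import numpy as np

def smallest_gamma(A, B, tol=1e-10):
    # Step 1: orthonormal eigenvectors of B belonging to nonzero eigenvalues.
    # B is symmetric PSD, so eigh applies; tol decides which eigenvalues
    # count as "nonzero" in floating point (my own choice of threshold).
    w, V = np.linalg.eigh(B)
    U = V[:, w > tol]          # columns span the range of B
    d = U.shape[1]
    if d == 0:
        return 0.0
    # Step 2: Cholesky factor of U^T B U (a d x d positive definite matrix).
    L = np.linalg.cholesky(U.T @ B @ U)
    # Step 3: largest eigenvalue of L^{-1} U^T A U L^{-T}.
    Linv = np.linalg.inv(L)
    C = Linv @ U.T @ A @ U @ Linv.T
    return float(np.max(np.linalg.eigvalsh(C)))
```

On small examples this behaves as I would expect: for $A = 2I$, $B = I$ it returns $2$, and for $A = \operatorname{diag}(3, 0)$, $B = \operatorname{diag}(1, 0)$ it returns $3$. So the algorithm seems correct numerically; what I am missing is the proof.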
Can you please guide me on why this is true? Also, can you suggest any references that I can study that give me the background needed for such problems?
Disclaimer: I searched for this question elsewhere but wasn't successful, so if it's a duplicate, leave me a comment and I will remove it.