In *Pattern Recognition and Machine Learning*, Chapter 1.6, the author derives the distribution that maximises the differential entropy:
$$H(\textbf{x}) = -\int p(\textbf{x}) \ln (p(\textbf{x})) \, d\textbf{x}$$
To do so, the author imposes three constraints:
$$\int_{-\infty}^{\infty} p(x) dx = 1$$ $$\int_{-\infty}^{\infty} xp(x) dx = \mu$$ $$\int_{-\infty}^{\infty} (x-\mu)^2p(x) dx = \sigma^2$$
This results in the Lagrangian functional:
$$F(p)=-\int_{-\infty}^{\infty} p(x) \ln(p(x)) dx + \lambda_1(\int_{-\infty}^{\infty} p(x) dx - 1) + \lambda_2 (\int_{-\infty}^{\infty} x p(x) dx - \mu) + \lambda_3(\int_{-\infty}^{\infty} (x-\mu)^2 p(x) dx - \sigma^2)$$
Setting the functional derivative of $F$ with respect to $p(x)$ equal to zero, using the calculus of variations, gives:
$$p(x)=\exp(-1+\lambda_1+\lambda_2 x + \lambda_3 (x-\mu)^2)$$
The author states that the Lagrange multipliers can be found by back-substituting this result into the three constraint equations, leading to the conclusion that $p(x)$ is a normal density.
I'm wondering how to derive this last step, specifically how to find the Lagrange multipliers. If we substitute back into the constraints we get three integral equations with three unknowns. How would I go about solving these equations?
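For context, any candidate solution can at least be checked numerically against the three constraints. The sketch below does this for the multiplier values obtained by matching the stationary form $\exp(-1+\lambda_1+\lambda_2 x + \lambda_3(x-\mu)^2)$ term-by-term to a Gaussian density (these values are assumed for the check, not derived here):

```python
import numpy as np

mu, sigma = 1.0, 2.0

# Assumed multipliers, read off by matching the stationary form
# to the Gaussian N(mu, sigma^2); the check below verifies them.
lam1 = 1.0 - 0.5 * np.log(2.0 * np.pi * sigma**2)
lam2 = 0.0
lam3 = -1.0 / (2.0 * sigma**2)

# Stationary-point density from the variational calculation
x = np.linspace(mu - 12 * sigma, mu + 12 * sigma, 400_001)
dx = x[1] - x[0]
p = np.exp(-1.0 + lam1 + lam2 * x + lam3 * (x - mu)**2)

# Evaluate the three constraint integrals on the grid
print((p * dx).sum())                  # should be ~ 1 (normalisation)
print((x * p * dx).sum())              # should be ~ mu
print(((x - mu)**2 * p * dx).sum())    # should be ~ sigma**2
```

With these values all three constraints come out correct, consistent with the book's claim that the maximising density is normal; the open question is how to solve the three integral equations for the multipliers analytically rather than just verify a guess.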