I am trying to confirm my understanding of how to apply the Log-Sum-Exp trick to recover a posterior distribution from a log-posterior. I want to work through a simple example from a model I am working on and recover the posterior from the log-posterior using the trick. Consider the example normalized posterior $$p(x) = (0.39898) \exp\left[-\frac{1}{8}(-2.5+x)^2\right] \sin ^2(x)$$ and the corresponding log-posterior $$\log(p(x))=-\log(2)+\log\left[\frac{\exp\left\{-\frac{1}{8}(-2.5+x)^2\right\}}{2\sqrt{2 \pi}}\right]+2\log(\sin (x)).$$
For the Log-Sum-Exp trick I now consider defining $c := \max_x \log(p(x)) = -2.40119$ (determined numerically with Mathematica).
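For reproducibility, the maximization can also be sketched in Python with a simple grid search (the grid bounds and resolution here are my own choices, not part of the model):

```python
import numpy as np

# log p(x) exactly as written above; |sin(x)| is used since sin^2(x) = |sin(x)|^2
def log_posterior(x):
    return (-np.log(2)
            - (x - 2.5)**2 / 8 - np.log(2 * np.sqrt(2 * np.pi))
            + 2 * np.log(np.abs(np.sin(x))))

# grid wide enough to cover the high-density region of the posterior
xs = np.linspace(0.001, 10.0, 200_000)
c = log_posterior(xs).max()
print(c)  # ≈ -2.40119, matching the Mathematica result
```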
The Log-Sum-Exp trick then implies that we can recover the posterior by writing $$p(x) = \exp\left(\log(p(x)) - c-\log \int \exp\big(\log(p(x))-c\big)\,dx\right).$$
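As a sanity check on my application of the trick, here is a minimal Python sketch of the whole computation; the finite integration range and the plain Riemann sum are my own simplifications of the continuous integral:

```python
import numpy as np

# log p(x) exactly as written above
def log_posterior(x):
    return (-np.log(2)
            - (x - 2.5)**2 / 8 - np.log(2 * np.sqrt(2 * np.pi))
            + 2 * np.log(np.abs(np.sin(x))))

# uniform grid standing in for the continuous domain (bounds are my own choice)
x = np.linspace(0.001, 10.0, 200_000)
dx = x[1] - x[0]
lp = log_posterior(x)

c = lp.max()
# stable log of the normalizer: log ∫ exp(log p(x) - c) dx, via a Riemann sum
log_Z = np.log(np.sum(np.exp(lp - c)) * dx)
p = np.exp(lp - c - log_Z)

# the construction self-normalizes on the grid, whatever constant sits in lp
print(np.sum(p) * dx)  # ≈ 1.0
```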
Does this seem like the correct application of the trick? When I evaluate this example numerically, I obtain $p(x)\exp(1.38639)$ rather than $p(x)$, and I am not sure where this extra factor of $\exp(1.38639)$ comes from. Since the posterior is normalized, would you not expect a factor of $\exp(0)=1$? I suspect the discrete form of the trick does not carry over to the continuous case directly. How would one adapt the discrete case to a continuous problem of this type? Are there any standard extensions of the Log-Sum-Exp trick to the continuous case? Thanks.