Recently, a professor at my university claimed that all optimization problems are convex (after a suitable transformation, if necessary). He is renowned for many important contributions to convex optimization, so I assume that his claim is correct. I would like to understand why it is true and whether one can always explicitly construct such a transformation. A particular example that convinced me of the claim is moments-based optimization (cf. Lasserre's paper on the subject, which can be found online). The idea is that finite-dimensional (non-convex) polynomial optimization problems are equivalent to infinite-dimensional (convex) optimization problems over measures.
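To make the reformulation explicit, here is the standard identity as I understand it (a sketch; assume $X$ compact, $f$ continuous, and let $\mathcal{P}(X)$ denote the Borel probability measures on $X$):
$$\min_{x \in X} f(x) \;=\; \min_{\mu \in \mathcal{P}(X)} \int_X f \, d\mu.$$
The right-hand side is linear in $\mu$ and $\mathcal{P}(X)$ is a convex set, so the problem on the right is convex (though infinite-dimensional); the "$\geq$" direction holds since $\int_X f \, d\mu \geq \min_X f$ for every probability measure $\mu$, and equality is attained at the Dirac measure $\delta_{x^*}$ for any minimizer $x^*$.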
I would like to know to what extent this idea can be generalized. Is it possible for general (non-convex) finite-dimensional nonlinear programs? If so, will the convex counterpart always be infinite-dimensional? And can non-convex infinite-dimensional optimization problems also be transformed into convex ones?
PS: I am aware that the above claim would not mean that non-convex optimization problems suddenly become easier to solve. I am interested in this for purely theoretical reasons.
Edit: As said above, Lasserre's paper uses the fact that non-convex polynomial optimization problems are equivalent to convex optimization problems over measures. The (short) proof of this fact, however, makes no use of the polynomial nature of the objective function. I believe the same argument goes through for any (measurable? continuous?) objective function, which would mean that every finite-dimensional non-convex optimization problem is equivalent to a convex optimization problem over measures. Have I overlooked something?
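To illustrate why the argument seems not to need polynomiality, here is a small numerical sketch (my own example; the objective $f$ and the grid discretization standing in for $\mathcal{P}(X)$ are hypothetical choices, not from Lasserre's paper). The map $\mu \mapsto \int f \, d\mu$ is linear in the weights, so any probability mixture of grid points scores at least as well as the best Dirac measure:

```python
import random

# Hypothetical non-convex objective (for illustration only):
# a double-well with a small tilt, so it has two local minima.
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

# Discretize the domain [-2, 2]; each grid point x_i carries a Dirac
# measure delta_{x_i}, and a weight vector on the simplex is a discrete
# probability measure mu = sum_i w_i * delta_{x_i}.
n = 401
xs = [-2.0 + 4.0 * i / (n - 1) for i in range(n)]
fvals = [f(x) for x in xs]
f_min = min(fvals)  # value of the best Dirac measure

# The objective mu -> integral of f d(mu) = sum_i w_i * f(x_i) is linear
# in w, and the probability simplex is convex; hence every mixture is at
# least f_min, with equality exactly at the Dirac on the minimizer.
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    w = [wi / s for wi in w]  # a random probability measure on the grid
    expected = sum(wi * fi for wi, fi in zip(w, fvals))
    assert expected >= f_min - 1e-12

print(round(f_min, 5))
```

Nothing here uses that $f$ is a polynomial, only that it can be integrated against probability measures, which is the point of my question.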