
Recently, a professor at my university made the claim that all optimization problems are convex (after a suitable transformation, if necessary). He is renowned for many important contributions to convex optimization, so I assume that his claim is correct. I would like to understand why this is true and whether one can always explicitly construct such a transformation. A particular example that convinced me the claim is true is moment-based optimization (cf. Lasserre's paper, discussed in the edit below). The idea is that finite-dimensional polynomial (non-convex) optimization problems are equivalent to infinite-dimensional (convex) optimization problems over measures.

I would like to know to what extent this idea can be generalized. Is this possible for general (non-convex) finite-dimensional nonlinear programs? If so, will the convex counterpart always be infinite-dimensional? And can we also transform non-convex infinite-dimensional optimization problems into convex ones?

PS: I am aware that the above claim would not mean that non-convex optimization problems suddenly become easier to solve. I am interested in this for purely theoretical reasons.

Edit: As said above, Lasserre's paper uses the fact that non-convex polynomial optimization problems are equivalent to convex optimization problems over measures. The (short) proof of this fact, however, makes no use of the polynomial nature of the objective function. I believe this can be proven for any (measurable/continuous?) objective function. This would mean that any finite-dimensional non-convex optimization problem is equivalent to a convex optimization problem over measures. Have I overlooked something?
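To make the claim in this edit concrete, here is a minimal numerical sketch (not from Lasserre's paper): $\min_{\mu} \int f \, d\mu$ over probability measures $\mu$ on $\Omega$ is linear, hence convex, in $\mu$, and its optimal value equals $\min_{x\in\Omega} f(x)$. The objective $f(x)=x^4-3x^2+x$, the set $\Omega=[-2,2]$, and the grid discretization of $\mu$ are all illustrative assumptions made so that the infinite-dimensional problem collapses to a finite linear program (solved here with SciPy).

```python
# Sketch (illustrative assumptions): minimizing a non-convex f over a compact Omega
# equals minimizing the linear functional mu -> \int f dmu over probability measures
# mu on Omega. Discretizing mu to an atomic measure on a grid turns this into an LP.
import numpy as np
from scipy.optimize import linprog

def f(x):
    return x**4 - 3 * x**2 + x            # a non-convex (polynomial) objective

grid = np.linspace(-2.0, 2.0, 4001)        # discretization of Omega = [-2, 2]
c = f(grid)                                # \int f dmu  =  c @ p  for atomic mu

# mu is a probability measure: weights p >= 0 (linprog's default bounds), sum(p) = 1.
res = linprog(c, A_eq=np.ones((1, grid.size)), b_eq=[1.0], method="highs")

print("value of the LP over measures:", res.fun)
print("direct grid minimum of f     :", c.min())   # the two values agree
```

The LP is convex regardless of how non-convex $f$ is; the non-convexity is absorbed into the (here discretized, in general infinite-dimensional) space of measures.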

  • Did he maybe say that all convex optimization problems are conic? If not, you should ask him to support his claim.
    – LinAlg
    Commented Apr 12, 2018 at 18:59
  • No, the claim was definitely as above. Moreover, as the case of polynomial optimization shows, it seems to be true at least to some extent. I am asking whether the claim is true in full generality (i.e., for non-polynomial or even infinite-dimensional problems) and, if so, why.
    – Nukular
    Commented Apr 12, 2018 at 19:54
  • This reminds me of the (true) notion that semidefinite programs can be expressed as semi-infinite linear programs: seas.ucla.edu/~vandenbe/publications/sip.pdf (Lasserre looked at this too, in fact)
    Commented Apr 13, 2018 at 4:29
  • As I said above, I am not interested in practical applications of the above statement. Concerning my question, the key point in Lasserre's paper is that non-convex polynomial optimization problems are equivalent to convex (infinite-dimensional) optimization problems over measures, which is shown at the very beginning. The part where Lasserre truncates the moment sequences to arrive at (sometimes exact) finite-dimensional convex relaxations is irrelevant for the above question.
    – Nukular
    Commented Apr 13, 2018 at 5:41
  • I'd be curious to hear what your professor meant by this comment. Can you just ask them to explain it and report back to us? Ideally with a link to some reference that explains what point the professor was making.
    – littleO
    Commented Apr 19, 2018 at 0:10

1 Answer


Suppose that your optimization problem is

$(P1) \quad \min_{x} f(x)$ such that $x \in \Omega$,

where $f:\mathbb{R}^n \rightarrow \mathbb{R}$ is a continuous real-valued function and $\Omega \subset \mathbb{R}^n$ is a nonempty compact set. The assumptions of continuity of $f$ and compactness of $\Omega$ are made to guarantee that a minimum exists (see Weierstrass' extreme value theorem).

This generic optimization problem attains the same optimal value as the convex optimization problem

$(P2) \quad \min_{x,\alpha} \alpha$ such that $(x,\alpha) \in \textrm{conv}(\{(x,\alpha) : x\in \Omega,\ f(x) \le \alpha\})$,

where $\textrm{conv}(S)$ denotes the convex hull of the set $S$.

Remark 1: As pointed out in the comments, although (P2) has the same optimal value as (P1), the two problems may have different optimal points.
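To illustrate both the construction of (P2) and Remark 1, here is a minimal numerical sketch; it borrows the $\sin$ example with $\Omega=[-\pi/2, 3\pi/2]$ from the comments below, and approximates the convex hull by convex combinations of sampled graph points $(x_i, f(x_i))$ (only the lower boundary of the hull matters when minimizing $\alpha$). All of these choices are illustrative assumptions, not part of the answer itself.

```python
# Sketch of (P2) under illustrative assumptions: f(x) = sin(x), Omega = [-pi/2, 3pi/2].
# The feasible set conv({(x, a) : x in Omega, f(x) <= a}) is approximated by convex
# combinations of sampled graph points (x_i, f(x_i)); weights lam >= 0, sum(lam) = 1.
import numpy as np
from scipy.optimize import linprog

xs = np.linspace(-np.pi / 2, 3 * np.pi / 2, 2001)
fs = np.sin(xs)

# (P2): minimize alpha = sum_i lam_i * f(x_i) over the sampled hull.
res = linprog(fs, A_eq=np.ones((1, xs.size)), b_eq=[1.0], method="highs")
print("optimal value of (P2):", res.fun)            # ~ -1, matching (P1)

# Remark 1 in action: pin x = 0 (where f(0) = 0) via sum_i lam_i * x_i = 0;
# the pinned problem still attains alpha ~ -1.
A_eq = np.vstack([np.ones_like(xs), xs])
res0 = linprog(fs, A_eq=A_eq, b_eq=[1.0, 0.0], method="highs")
print("optimal value with x fixed at 0:", res0.fun)  # ~ -1
```

The second LP shows Remark 1 concretely: the point $(x,\alpha)=(0,-1)$ is optimal for (P2) even though $f(0)=0$, so (P2) recovers the optimal value of (P1) but not its optimal points.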

Remark 2: Some authors define a convex optimization problem differently, namely as

$\min_x f(x)$ such that $ g_i(x) \le 0, i=1,\ldots,m$ and $h_j(x) = 0, j=1,\ldots,k$, where $f, g_i$ are convex functions and $h_j$ are affine functions.

For more details, see Amir Ali Ahmadi's lecture notes on convex and conic optimization, in particular, pages 13 and 14 of lecture 4 from 2016.

  • I am very skeptical of the claim of equivalence here. It is far too convenient that any finite-dimensional non-convex optimization problem can be translated to a convex optimization problem simply by applying the standard epigraph transformation? No, I don't buy it.
    Commented Apr 15, 2018 at 19:48
  • Consider a simple example: $f(x)=\sin x$, $\Omega=[-\pi,\pi]$. Clearly, the solution to the original problem is $x^*=-\pi/2$, $f(x^*)=-1$. But while the converted problem has the same optimal value of $-1$, it achieves that value on any $x\in\Omega$, because the convex hull reduces to $[-\pi,\pi]\times [-1,+\infty)$. This definitely is not equivalent; for the problems to be equivalent, they would have to obtain the same optimal points.
    Commented Apr 15, 2018 at 19:51
  • @MichaelGrant I believe you have a good point, but in your example, isn't the convex hull given by this region? If this is the case, I think the optimal points (at least in this example) are the same.
    – shamisen
    Commented Apr 16, 2018 at 0:53
  • @MichaelGrant Yes, I think your comment is false if we are minimising $\sin x$ over $[-\pi,\pi]$. However, the point you are making is true when $\Omega = [-\pi /2 , 3\pi / 2]$.
    Commented Apr 17, 2018 at 12:47
  • I interpret this result as saying that finding the convex hull of an epigraph is at least as hard as nonlinear optimization.
    – user856
    Commented Apr 19, 2018 at 3:14
