
Exercise 3.32 on page 119 of Convex Optimization (Boyd and Vandenberghe) concerns proving that if $f:\mathbb{R}\rightarrow\mathbb{R}$ and $g:\mathbb{R}\rightarrow\mathbb{R}$ are both convex, both nondecreasing (or both nonincreasing), and positive, then $h:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto h(x)=f(x)g(x)$ is also convex.

Are there any generalisations or analogues of this claim to the case where both $f$ and $g$ are convex functions from $\mathbb{R}^n$ to $\mathbb{R}$, for $n>1$?
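
For a quick numerical illustration of the one-dimensional statement, here is a minimal sketch assuming NumPy; the particular choices $f(x)=e^x$ and $g(x)=\max(x,0)^2+1$ (both convex, nondecreasing, and positive) are illustrative, not from the exercise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices: both f and g are convex, nondecreasing,
# and positive on all of R, so the exercise says h = f*g is convex.
f = lambda x: np.exp(x)
g = lambda x: np.maximum(x, 0.0) ** 2 + 1.0
h = lambda x: f(x) * g(x)

for _ in range(10_000):
    x, y = rng.uniform(-3.0, 3.0, size=2)
    t = rng.uniform()
    # Convexity of h: h(t x + (1-t) y) <= t h(x) + (1-t) h(y), up to rounding.
    assert h(t * x + (1 - t) * y) <= t * h(x) + (1 - t) * h(y) + 1e-9
```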

  • Why is it necessary that $f,g$ be monotone?
    – a06e
    Commented Sep 26, 2018 at 21:20
  • @becko The statement is not true if we drop monotonicity. For instance, let $f(x)=|x|+10^{-6}$ and $g(x)=e^{-x}$; then $h=fg$, i.e. $h(x)=(|x|+10^{-6})e^{-x}$, is not convex. Indeed, $h(0)=10^{-6}$, $h(1)=1.000001/e$, and $h(2)=2.000002/e^2<1.000001/e$ since $2/e<1$, so $h(0),h(2)<h(1)$; hence $h$ is not quasiconvex and in particular not convex.
    Commented Oct 16, 2020 at 0:17
  • See also Exercise 10(g), page 204, in Stromberg's An Introduction to Classical Real Analysis (AMS, 2015), or Exercise 10(g), page 204, in Stromberg's Introduction to Classical Real Analysis (Wadsworth, 1981). This book has lots of exercises, by the way.
    Commented Oct 16, 2020 at 0:22

1 Answer


Yes, a generalization is possible. Here is an elementary approach to the convexity of the product of two nonnegative convex functions defined on a convex domain of $\mathbb{R}^n$.

Choose $x$ and $y$ in the domain and $t$ in $[0,1]$. Your aim is to prove that $\Delta\ge0$ with $$\Delta=t(fg)(x)+(1-t)(fg)(y)-(fg)(tx+(1-t)y). $$ But $f$ and $g$ are nonnegative and convex, hence $$ (fg)(tx+(1-t)y)\le(tf(x)+(1-t)f(y))(tg(x)+(1-t)g(y)). $$ Using this and some easy algebraic manipulations, one sees that $\Delta\ge t(1-t)D(x,y)$ with $$ D(x,y)=f(x)g(x)+f(y)g(y)-f(x)g(y)-f(y)g(x), $$ that is, $$ D(x,y)=(f(x)-f(y))(g(x)-g(y)). $$

This proves a generalization of the result you quoted to any product of convex nonnegative functions $f$ and $g$ such that $D(x,y)\ge0$ for every $x$ and $y$ in the domain of $f$ and $g$.
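
Spelling out the "easy algebraic manipulations": expanding the product bound above, $$\begin{aligned}\Delta&\ge tf(x)g(x)+(1-t)f(y)g(y)-\bigl(tf(x)+(1-t)f(y)\bigr)\bigl(tg(x)+(1-t)g(y)\bigr)\\&=t(1-t)\bigl(f(x)g(x)+f(y)g(y)-f(x)g(y)-f(y)g(x)\bigr)=t(1-t)D(x,y).\end{aligned}$$ Note that this recovers the one-dimensional result from the book: if $f$ and $g$ are both nondecreasing (or both nonincreasing), then $f(x)-f(y)$ and $g(x)-g(y)$ always share a sign, so $D(x,y)\ge0$.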

In particular, if $f$ and $g$ are differentiable at a point $x$ in the domain, letting $y\to x$ in the condition $D(x,y)\ge0$ asks that the gradients $\nabla f(x)$ and $\nabla g(x)$ satisfy $z^*M(x)z\ge0$ for every $n\times 1$ vector $z$, where $M(x)$ is the $n\times n$ matrix $$ M(x)=\nabla f(x)\cdot(\nabla g(x))^*. $$ Equivalently, $(z^*\nabla f(x))\,(z^*\nabla g(x))\ge0$ for every direction $z$.
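
As a concrete sanity check, here is a minimal NumPy sketch; the pair $f(x)=e^{a\cdot x}$, $g(x)=e^{2a\cdot x}$ on $\mathbb{R}^2$ is my illustrative choice, for which $D(x,y)\ge0$ holds since both functions increase together along $a$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative pair on R^2: f(x) = exp(a.x) and g(x) = exp(2 a.x) are convex,
# nonnegative, and grow together along a, so
# D(x, y) = (f(x) - f(y)) * (g(x) - g(y)) >= 0 for all x, y.
a = np.array([1.0, -0.5])
f = lambda x: np.exp(a @ x)
g = lambda x: np.exp(2.0 * (a @ x))

for _ in range(10_000):
    x, y = rng.normal(size=(2, 2))
    t = rng.uniform()
    D = (f(x) - f(y)) * (g(x) - g(y))
    assert D >= 0.0
    # The answer's bound: Delta >= t (1-t) D(x, y) >= 0, so fg is convex.
    lhs = t * f(x) * g(x) + (1 - t) * f(y) * g(y)
    rhs = f(t * x + (1 - t) * y) * g(t * x + (1 - t) * y)
    tol = 1e-9 * (f(x) * g(x) + f(y) * g(y) + 1.0)  # rounding slack
    assert lhs - rhs >= t * (1 - t) * D - tol

# Pointwise gradient criterion: grad f(x) = f(x) a and grad g(x) = 2 g(x) a
# are nonnegative multiples of the same vector, so (z . grad f)(z . grad g) >= 0.
x = rng.normal(size=2)
gf, gg = f(x) * a, 2.0 * g(x) * a
for _ in range(100):
    z = rng.normal(size=2)
    assert (z @ gf) * (z @ gg) >= -1e-9
```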

  • If one were willing to consider smooth functions, the derivative version of this statement may make the mechanism easier to understand (depends on one's point of view, of course).
    Commented Mar 17, 2011 at 12:13
  • @Willie If $f$ is not differentiable at $x$, replace the gradient of $f$ at $x$ by any subgradient of $f$ at $x$.
    – Did
    Commented Mar 17, 2011 at 13:13
  • Thanks a lot for your answer and all the details!
    Commented Mar 18, 2011 at 9:02
