My professor gave us this exercise:
Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function. Given the following two definitions of convexity of $f$, prove that (i) implies (ii):
(i) $\forall x, y \in \mathbb{R} : f(x) \ge f(y) + f'(y)(x - y)$
(ii) $\forall x, y \in \mathbb{R}, \forall \lambda \in [0, 1] : f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y)$
I think definition (i) is incorrect as stated. I believe it would only be correct if we additionally required $x \ge y$, because for $x \neq y$, (i) can be rewritten as:
$$\frac{f(x)-f(y)}{x-y} \ge f'(y)$$
So the secant slope from $y$ to $x$ is at least the slope of the tangent at $y$. That makes sense, because for a convex function the slope increases.
But if we choose $y$ to be greater than $x$, the inequality seems to fail. Here is an example:
Take $f(x) = x^2$ with $x = 1$ and $y = -1$. The secant slope from $x$ to $y$ (or $y$ to $x$) is $0$; it is the black line in the plot. The slope at $y = -1$ is $-2$, the green line, so we indeed have $-2 \le 0$. But if we instead pick $x = -1$ and $y = 1$, the rewritten inequality would state $0 \ge 2$, which is false.
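To make the example concrete, here is a small numerical check of the rewritten (divided) inequality for $f(x) = x^2$; the helper names are mine, not part of the exercise:

```python
def f(x):
    return x ** 2

def fprime(x):
    # derivative of x^2
    return 2 * x

def divided_form_holds(x, y):
    # rewritten inequality: (f(x) - f(y)) / (x - y) >= f'(y)
    return (f(x) - f(y)) / (x - y) >= fprime(y)

print(divided_form_holds(1, -1))   # secant slope 0 vs. slope -2 at y = -1
print(divided_form_holds(-1, 1))   # secant slope 0 vs. slope  2 at y = 1
```

The first call succeeds ($-2 \le 0$) and the second fails ($0 \ge 2$ is false), matching the two orderings above.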
So to me it seems the correct definition would be:
(i) $\forall x, y \in \mathbb{R}$ with $x \ge y : f(x) \ge f(y) + f'(y)(x - y)$
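As a sanity check of this restricted version, one can sample random pairs with $x \ge y$ and verify the inequality for $f(x) = x^2$, the function from the example (this is only a sketch for that one function, with helper names of my choosing):

```python
import random

def f(x):
    return x ** 2

def fprime(x):
    # derivative of x^2
    return 2 * x

# Check the restricted definition: for x >= y,
# f(x) >= f(y) + f'(y) * (x - y)
random.seed(0)
for _ in range(10_000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    x, y = max(a, b), min(a, b)   # enforce x >= y
    assert f(x) >= f(y) + fprime(y) * (x - y)
print("restricted inequality held on all sampled pairs")
```

For $f(x) = x^2$ the gap is $f(x) - f(y) - f'(y)(x - y) = (x - y)^2 \ge 0$, so the check passes on every pair.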