
Suppose that for a function $f$, all of its partial derivatives are $0$.

Thus, $\frac{\partial f_i}{\partial x_j} = 0$ for any $i = 1, \dots, m$ and any $j = 1, \dots, n$.

Is there any easy way to prove that $f$ is constant? The result seems obvious, but I'm having a hard time explaining explicitly why it's true.

3 Comments

  • Think of it this way: you can ‘walk’ from any point of $\Bbb R^n$ to any other point along lines parallel to the coordinate axes, and $f$ is constant along those lines. Commented Apr 11, 2012 at 22:20
  • This assumes of course that the domain of $f$ (which you have not specified) contains the required paths. Commented Apr 11, 2012 at 22:23
  • @ChrisEagle : And in particular, it doesn't contain those paths if it's not connected. Commented Apr 11, 2012 at 22:24
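The "walk along coordinate axes" idea from the first comment can be made concrete. The snippet below is an illustrative sketch (not from the thread): it builds the corner points of an axis-parallel path from $p$ to $q$, changing one coordinate at a time.

```python
# Illustrative sketch: to travel from p to q in R^n along lines parallel
# to the coordinate axes, change one coordinate at a time.

def axis_parallel_waypoints(p, q):
    """Corner points of an axis-parallel path from p to q."""
    waypoints = [tuple(p)]
    current = list(p)
    for j in range(len(p)):
        current[j] = q[j]          # move parallel to the j-th axis
        waypoints.append(tuple(current))
    return waypoints

print(axis_parallel_waypoints((0, 0, 0), (1, 2, 3)))
# [(0, 0, 0), (1, 0, 0), (1, 2, 0), (1, 2, 3)]
```

Since $f$ is constant along each such segment (the corresponding partial derivative vanishes there), $f(p) = f(q)$ whenever every segment of the path stays inside the domain.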

3 Answers


The result is FALSE in general, even in one dimension. Consider $f:(0,1) \cup (1,2) \to \mathbb{R}$ given by $$f(x) = \left\{\begin{array}{lr} 0 & x \in (0,1)\\1 & x \in (1,2)\end{array} \right. $$ We have that $f'(x) = 0$ for all $x \in (0,1) \cup (1,2)$, but $f$ is not constant.
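A quick numerical check of this counterexample (an illustrative sketch, not part of the argument): the derivative vanishes on both components of the disconnected domain, yet the function takes two different values.

```python
# Counterexample: f' = 0 on each component of the disconnected
# domain (0,1) ∪ (1,2), yet f is not constant.

def f(x):
    if 0 < x < 1:
        return 0.0
    if 1 < x < 2:
        return 1.0
    raise ValueError("x is outside the domain (0,1) ∪ (1,2)")

def derivative(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(0.5), derivative(1.5))  # 0.0 0.0  (f' vanishes everywhere)
print(f(0.5), f(1.5))                    # 0.0 1.0  (f is not constant)
```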

We do, however, have the following theorem which states that $f$ must be constant on the connected components of its domain:

Theorem. Let $U \subseteq \mathbb{R}^N$ be an open set, and suppose that $f:U \to \mathbb{R}$. If for every $x \in U$ and all $i \in \{ 1, \ldots, N\}$ there is $\frac{\partial f}{\partial x_i}(x) = 0$, then $f$ is constant on each connected component of $U$.

There are many proofs, and I will outline one here:

Proof.

Step 1: Prove that for every $x \in U$ there is a neighborhood of $x$ on which $f$ is constant. Since $U$ is open, choose an open ball $B \subseteq U$ centered at $x$. Any point of $B$ can be joined to $x$ by finitely many axis-parallel segments lying in $B$, and by the one-variable Mean Value Theorem $f$ is constant along each such segment, since the derivative along it is the corresponding (vanishing) partial derivative.

Step 2: Let $K$ be a connected component of $U$. We know that $K$ is open since the connected components of an open set in $\mathbb{R}^N$ are open. Choose $x_0 \in K$, and define $V := f^{-1}(f(x_0))\cap K$. By step 1 we have that $V$ is open. Since $f$ is constant in a neighborhood of every point, it follows that $f$ is continuous. Hence $W:=f^{-1}(\mathbb{R}\setminus\{f(x_0)\}) \cap K$ is open (since $K$ is open). We note that $V \cup W = K$, and $V \cap W = \emptyset$. Since $x_0 \in V$ it follows that $V \neq \emptyset$. Hence we must have that $W = \emptyset$, otherwise $(V,W)$ would be a separation of $K$ which would be a contradiction since $K$ is connected. It follows that $K=V$ so that $f(x)=f(x_0)$ for all $x \in K$. $\blacksquare$

1 Comment

  • For step $1$, we need to use the Mean Value Theorem in $\mathbb R^n$. The following site is a reference: mathonline.wikidot.com/… – Sam Wong, Commented Oct 18, 2018 at 7:51

We assume that the function $f$ is everywhere defined and that the partial derivatives are identically $0$. Fix $i$. Because the partial derivative of $f_i$ with respect to $x_j$ is identically equal to $0$, the function $f_i$ does not vary as $x_j$ varies. This is true for all the $x_j$, so the function value is independent of the values of all the $x_j$.

But we took it for granted that if the (partial) derivative with respect to $x_j$ is $0$, the function value is independent of the value of $x_j$. We need to show this. It is really a one-variable problem. We want to show that if $g'(x)$ is identically $0$, then $g(x)$ does not depend on $x$.

This fact follows from the (one variable) Mean Value Theorem. By the MVT, for any $a$ and $b$ with $a \ne b$, there is a $c$ between $a$ and $b$ such that $$g(b)=g(a)+(b-a)g'(c).$$ But $g'(c)=0$, and therefore $g(a)=g(b)$.
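For a concrete instance of the MVT identity above (an illustrative choice, not from the answer), take $g(x) = x^2$, for which the intermediate point is exactly $c = (a+b)/2$; when instead $g' \equiv 0$, the same identity forces $g(a) = g(b)$.

```python
# MVT identity g(b) = g(a) + (b - a) * g'(c) for g(x) = x**2,
# where the intermediate point is c = (a + b) / 2.

def g(x):
    return x * x

def g_prime(x):
    return 2 * x

a, b = 1.0, 3.0
c = (a + b) / 2   # MVT point for this particular g

print(g(b), g(a) + (b - a) * g_prime(c))  # 9.0 9.0
```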

4 Comments

  • How are you defining $b-a$, given that $a,b$ are in $\mathbb R^n$? Moreover, $g(b)$, $g(a)$ may also be vector-valued. – Jay K, Commented Apr 11, 2012 at 22:50
  • Please feel free to delete my "answer", my apologies, I still don't fully get the protocol. Actually, if I cannot comment, please delete my comment. My apologies again. I wish one were given a few posts-worth-of-leeway and a chance to learn the protocol before receiving negative votes. I do not intend to sabotage, I just really did not know this was not allowed. – Jay K, Commented Apr 11, 2012 at 23:13
  • @Jay: It's no problem, we understand that there are a lot of rules and mechanisms to get used to as a new user of the site. Please don't take the downvote personally. I hope you continue to participate here; as your reputation score increases, you will gain abilities (see the explanation here). Commented Apr 12, 2012 at 2:03
  • Surely to get a tangent hyperplane we need stronger assumptions, say that $f$ is differentiable, rather than just partially so? Commented Apr 12, 2012 at 7:21

It's not hard to see that, given $\mathbf{p},\mathbf{q}\in\mathbb{R}^N$:

$\hspace{2cm}\mathbf{f}(\mathbf{p})-\mathbf{f}(\mathbf{q})=\bigl(f_1(\mathbf{p})-f_1(\mathbf{q}), f_2(\mathbf{p})-f_2(\mathbf{q}), \ldots, f_M(\mathbf{p})-f_M(\mathbf{q})\bigr)$

$\hspace{4.5cm} =\bigl(\int_{\gamma} \nabla f_1(\mathbf{r})\cdot d\mathbf{r}, \int_{\gamma} \nabla f_2(\mathbf{r})\cdot d\mathbf{r}, \ldots, \int_{\gamma} \nabla f_M(\mathbf{r})\cdot d\mathbf{r}\bigr)$

$\hspace{4.5cm}=\mathbf{0}$

This is because, according to the Gradient Theorem, given any curve $\gamma$ with end points $\mathbf{p},\mathbf{q} \in \mathbb{R}^N$, we have:

$\hspace{6cm} f_i\left(\mathbf{p}\right)-f_i\left(\mathbf{q}\right) = \int_{\gamma} \nabla f_i(\mathbf{r})\cdot d\mathbf{r} $

and $\nabla f_i = \frac{\partial f_i}{\partial x_1 }\mathbf{e}_1 + \cdots + \frac{\partial f_i}{\partial x_N }\mathbf{e}_N=0$ for all $i$, by assumption.
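A numerical sanity check of the Gradient Theorem identity (an assumed example, not from the answer), for $f(x,y) = x^2 + 3y$ along the straight segment from $\mathbf{q}$ to $\mathbf{p}$:

```python
# Check f(p) - f(q) = ∫_γ ∇f · dr for f(x, y) = x**2 + 3*y along the
# straight segment from q to p, via a midpoint-rule Riemann sum.

def f(x, y):
    return x**2 + 3*y

def grad_f(x, y):
    return (2*x, 3.0)

q = (0.0, 0.0)
p = (1.0, 2.0)
dx, dy = p[0] - q[0], p[1] - q[1]

n = 1000
integral = 0.0
for k in range(n):
    t = (k + 0.5) / n                      # parameter along the segment
    gx, gy = grad_f(q[0] + t*dx, q[1] + t*dy)
    integral += (gx*dx + gy*dy) / n

print(integral, f(*p) - f(*q))  # both ≈ 7.0
```

When the gradient vanishes identically, each line integral is $0$, so $f(\mathbf{p}) = f(\mathbf{q})$ for any two points joined by a curve in the domain.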

