
The three-dimensional Laplacian can be defined as $$\nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}.$$ Expressed in spherical coordinates, it does not have such a nice form. But I could define a different operator (let's call it a "Laspherian") which would simply be the following:

$$\bigcirc^2=\frac{\partial^2}{\partial \rho^2}+\frac{\partial^2}{\partial \theta^2}+\frac{\partial^2}{\partial \phi^2}.$$

This looks nice in spherical coordinates, but if I tried to express the Laspherian in Cartesian coordinates, it would be messier.
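
For concreteness, a quick sympy sketch (an illustrative addition, not part of the original argument; the test function $f = x^2 z$ is arbitrary): the usual spherical-coordinate form of the Laplacian reproduces the Cartesian result, while the naive "Laspherian" gives a different, coordinate-dependent answer.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
rho, th, ph = sp.symbols('rho theta phi', positive=True)

f_cart = x**2 * z                                             # test function, Cartesian form
f_sph = rho**3 * sp.sin(th)**2 * sp.cos(th) * sp.cos(ph)**2   # the same function in spherical form

# Cartesian Laplacian
lap_cart = sum(sp.diff(f_cart, v, 2) for v in (x, y, z))      # -> 2*z

# textbook spherical-coordinate Laplacian
lap_sph = (sp.diff(rho**2 * sp.diff(f_sph, rho), rho) / rho**2
           + sp.diff(sp.sin(th) * sp.diff(f_sph, th), th) / (rho**2 * sp.sin(th))
           + sp.diff(f_sph, ph, 2) / (rho**2 * sp.sin(th)**2))

# the naive "Laspherian"
laspherian = sp.diff(f_sph, rho, 2) + sp.diff(f_sph, th, 2) + sp.diff(f_sph, ph, 2)

print(sp.simplify(lap_sph))      # simplifies to 2*rho*cos(theta) = 2*z, matching lap_cart
print(sp.simplify(laspherian))   # a different expression: the Laspherian is not the Laplacian
```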

Mathematically, both operators seem perfectly valid to me. But there are so many equations in physics that use the Laplacian, yet none that use the Laspherian. So why does nature like Cartesian coordinates so much better?

Or has my understanding of this gone totally wrong?

Comments:

  • Your Laspherian is not dimensionally consistent. – wcc (Apr 26, 2019)
  • That's true, but the Laplacian wouldn't be dimensionally consistent either, except that we happen to have given $x$, $y$, and $z$ all the same units. We could equally well give the same units to $\rho$, $\theta$, and $\phi$. I think @knzhou's answer of rotational symmetry justifies why, at least in our universe, we only do the former. I've never made that connection before, though! – Sam Jaques (Apr 26, 2019)
  • You can't give the same units to distance and angle. (Apr 26, 2019)
  • @SamJaques Your original question is good, but the above comment comes off as you being stubborn. You are asking what is more confusing about a convention where angles and distance have the same units than a system where they have different units? Come on, man. (Apr 26, 2019)
  • "Mathematically, both operators seem perfectly valid to me." Mathematically, it's perfectly valid for gravity to disappear every Tuesday or for the electric force to drop off linearly with distance. Most things that are mathematically valid are not the way the universe works. – Owen (Apr 27, 2019)

4 Answers

Answer 1 (score 193, +50 bounty):

Nature appears to be rotationally symmetric, favoring no particular direction. The Laplacian is the only translationally-invariant second-order differential operator obeying this property. Your "Laspherian" instead depends on the choice of polar axis used to define the spherical coordinates, as well as the choice of origin.

Now, at first glance the Laplacian seems to depend on the choice of $x$, $y$, and $z$ axes, but it actually doesn't. To see this, consider switching to a different set of axes, with associated coordinates $x'$, $y'$, and $z'$. If they are related by $$\mathbf{x} = R \mathbf{x}'$$ where $R$ is a rotation matrix, then the derivative with respect to $\mathbf{x}'$ is, by the chain rule, $$\frac{\partial}{\partial \mathbf{x}'} = \frac{\partial \mathbf{x}}{\partial \mathbf{x}'} \frac{\partial}{\partial \mathbf{x}} = R \frac{\partial}{\partial \mathbf{x}}.$$ The Laplacian in the primed coordinates is $$\nabla'^2 = \left( \frac{\partial}{\partial \mathbf{x}'} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}'} \right) = \left(R \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left(R \frac{\partial}{\partial \mathbf{x}} \right) = \frac{\partial}{\partial \mathbf{x}} \cdot (R^T R) \frac{\partial}{\partial \mathbf{x}} = \left( \frac{\partial}{\partial \mathbf{x}} \right) \cdot \left( \frac{\partial}{\partial \mathbf{x}} \right),$$ where the transpose appears when the dot product is rewritten as a matrix product ($\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b}$); since $R^T R = I$ for rotation matrices, this is equal to the Laplacian in the original Cartesian coordinates.
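
A quick numerical sanity check of this invariance (my own sketch, not part of the original answer; the test function and rotation are arbitrary): the finite-difference Laplacian of a function at a rotated point agrees with the Laplacian of the rotated function at the original point.

```python
import numpy as np

def laplacian(f, p, h=1e-4):
    """Central-difference estimate of the Laplacian of f at the point p."""
    total = 0.0
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        total += (f(p + e) - 2.0 * f(p) + f(p - e)) / h**2
    return total

f = lambda r: np.sin(r[0]) * np.exp(r[1]) + r[2]**3   # arbitrary smooth test function

# a rotation built from two axis rotations (so R^T R = I)
a, b = 0.7, 1.1
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(b), -np.sin(b)], [0.0, np.sin(b), np.cos(b)]])
R = Rx @ Rz

f_rot = lambda r: f(R @ r)          # the same field expressed in the primed coordinates
p = np.array([0.3, -1.2, 0.8])

print(laplacian(f, R @ p), laplacian(f_rot, p))   # agree up to finite-difference error
```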

To make the rotational symmetry more manifest, you could alternatively define the Laplacian of a function $f$ in terms of the deviation of $f$ at each point from its average value on a small sphere centered at that point. That is, the Laplacian measures concavity in a rotationally invariant way; this characterization can be derived in an elegant, coordinate-free manner.

The Laplacian looks nice in Cartesian coordinates because the coordinate axes are straight and orthogonal, and hence measure volumes straightforwardly: the volume element is $dV = dx\, dy\, dz$ without any extra factors. This can be seen from the general expression for the Laplacian, $$\nabla^2 f = \frac{1}{\sqrt{g}} \partial_i\left(\sqrt{g}\, \partial^i f\right)$$ where $g$ is the determinant of the metric tensor. The Laplacian only takes the simple form $\partial_i \partial^i f$ when the metric components are constant, as they are in Cartesian coordinates.
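
A small sympy sketch of this formula (my own illustration, not part of the answer): plug the spherical metric $g_{ij} = \mathrm{diag}(1,\, r^2,\, r^2\sin^2\theta)$ into $\frac{1}{\sqrt{g}}\partial_i(\sqrt{g}\, g^{ij}\partial_j f)$ and recover the textbook spherical Laplacian.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
q = [r, th, ph]

g = sp.diag(1, r**2, (r * sp.sin(th))**2)   # Euclidean metric in spherical coordinates
ginv = g.inv()
sqrtg = r**2 * sp.sin(th)                   # sqrt(det g) for 0 < theta < pi

f = sp.Function('f')(r, th, ph)

# (1/sqrt(g)) * d_i( sqrt(g) * g^{ij} * d_j f ), using that the metric is diagonal
lap = sum(sp.diff(sqrtg * ginv[i, i] * sp.diff(f, q[i]), q[i]) for i in range(3)) / sqrtg

print(sp.simplify(lap))   # expands to the textbook spherical Laplacian (with 1/r^2 and 1/sin(theta) factors)
```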


Given all this, you might still wonder why the Laplacian is so common. It's simply because there are so few ways to write down partial differential equations that are low-order in time derivatives (required by Newton's second law, or at a deeper level, because Lagrangian mechanics is otherwise pathological), low-order in spatial derivatives, linear, translationally invariant, time invariant, and rotationally symmetric. There are essentially only five possibilities: the heat/diffusion, wave, Laplace, Schrödinger, and Klein-Gordon equations, and all of them involve the Laplacian.

The paucity of options leads one to imagine an "underlying unity" of nature, which Feynman explains in similar terms:

Is it possible that this is the clue? That the thing which is common to all the phenomena is the space, the framework into which the physics is put? As long as things are reasonably smooth in space, then the important things that will be involved will be the rates of change of quantities with position in space. That is why we always get an equation with a gradient. The derivatives must appear in the form of a gradient or a divergence; because the laws of physics are independent of direction, they must be expressible in vector form. The equations of electrostatics are the simplest vector equations that one can get which involve only the spatial derivatives of quantities. Any other simple problem—or simplification of a complicated problem—must look like electrostatics. What is common to all our problems is that they involve space and that we have imitated what is actually a complicated phenomenon by a simple differential equation.

At a deeper level, the reason for the linearity and the low-order spatial derivatives is that in both cases, higher-order terms will generically become less important at long distances. This reasoning is radically generalized by the Wilsonian renormalization group, one of the most important tools in physics today. Using it, one can show that even rotational symmetry can emerge from a non-rotationally symmetric underlying space, such as a crystal lattice. One can even use it to argue the uniqueness of entire theories, as done by Feynman for electromagnetism.

Comments:

  • In other words, the Cartesian form of the Laplacian is nice because the Cartesian metric tensor is nice. (Apr 26, 2019)
  • I think it's also probably valid to talk about the structure of spacetime; it is Lorentzian and in local inertial frames it always looks like Minkowski space. So if we were to ignore the time coordinate and just consider the spatial components of spacetime, then the structure always possesses Riemannian geometry and appears Euclidean in a local inertial frame. Cartesian coordinates are then the most natural way to describe Euclidean geometry, which is why the Laplacian appears the way it does. Nature favours the Laplacian because space appears Euclidean in local inertial frames. – Ollie113 (Apr 26, 2019)
  • Are you drawing a distinction between the heat/diffusion and Schrödinger equations because the latter contains terms depending on the fields themselves, rather than just their derivatives? (And similarly for "wave" vs. "Klein-Gordon"?) Or is there another reason that you're differentiating between cases that have the same differential operators in them? (Apr 28, 2019)
  • The third block-set equation makes explicit use of the notion that an inner product is taken between a space and its dual, but the notation associated with that idea appears halfway through as if out of nowhere. It might be better to include the ${}^T$ in the first two dot products as well. (Apr 28, 2019)
  • Yes, please explain where that $^T$ suddenly comes from. Just give us the general public some names we could search for. – Will Ness (Apr 29, 2019)

Answer 2 (score 30):

This is a question that haunted me for years, so I'll share my view of the Laplace equation, which is the simplest equation you can write with the Laplacian.

If you force the Laplacian of some quantity to be 0, you are writing a differential equation that says "take the average value of the surroundings". It's easiest to see in Cartesian coordinates (in two dimensions, for simplicity):

$$\nabla ^2 u = \frac{\partial^2 u}{\partial x ^2} + \frac{\partial^2 u}{\partial y ^2} $$

If you approximate the partial derivatives by

$$ \frac{\partial f}{\partial x }(x) \approx \frac{f(x + \frac{\Delta x}{2}) - f(x-\frac{\Delta x}{2})}{\Delta x} $$ $$ \frac{\partial^2 f}{\partial x^2 }(x) \approx \frac{ \frac{\partial f}{\partial x } \left( x+ \frac{\Delta x}{2} \right) - \frac{\partial f}{\partial x } \left( x - \frac{\Delta x}{2} \right) } { \Delta x} = \frac{ f(x + \Delta x) - 2 \cdot f(x) + f(x - \Delta x) } { \Delta x ^2 } $$

For simplicity, take $\Delta x = \Delta y = \delta$; then the Laplace equation $$\nabla ^2 u =0 $$ becomes: $$ \nabla ^2 u (x, y) \approx \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) } { \delta ^2 } + \frac{ u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$

so

$$ \frac{ u(x + \delta, y) - 2 u(x, y) + u(x - \delta, y) + u(x, y+ \delta) - 2 u(x, y) + u(x, y - \delta) } { \delta ^2 } = 0 $$

from which you can solve for $u(x, y)$ to obtain $$ u(x, y) = \frac{ u(x + \delta, y) + u(x - \delta, y) + u(x, y+ \delta)+ u(x, y - \delta) } { 4 } $$

That can be read as: "The function/field/force/etc. at a point takes the average value of the function/field/force/etc. evaluated on either side of that point along each coordinate axis."

[Figure: hand-drawn sketch of a function satisfying the Laplace equation.]

Of course this only works when $\delta$ is very small compared with the relevant length scales of the problem at hand, but I think it captures the intuition well.

I think what this tells us about nature is that, at first sight and on a local scale, everything is an average. But it may also tell us about how we humans model nature: our first model is always "take the average value", and only later do we move on to more intricate or detailed models.
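
A minimal sketch of this averaging rule in action (my own code, not the answerer's): Jacobi relaxation of the 2-D Laplace equation on a grid, where every interior point is repeatedly replaced by the mean of its four neighbours. This is exactly the spreadsheet trick mentioned in the comments below.

```python
import numpy as np

n = 51
u = np.zeros((n, n))
u[0, :] = 1.0                      # hold the top edge at potential 1, the rest at 0

for _ in range(5000):              # iterate until the interior essentially stops changing
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

print(u[n // 2, n // 2])           # ~0.25: each edge contributes equally by symmetry
```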

Comments:

  • Out of curiosity, is that (very nice) figure a scan of a hand sketch, or do you have a software tool that supports such nice work? (Apr 28, 2019)
  • Your nice idea that the potential u(x,y) is the average of its surroundings is exactly the way a spreadsheet (like Excel) is used to solve the Poisson equation for electrostatic problems that are 2-dimensional, like a long metal pipe perpendicular to the spreadsheet. Each cell is programmed to equal the average of its surrounding 4 cells. Fixed numbers (=voltage) are then put into any boundary or interior cells that are held at fixed potentials. The spreadsheet is then iterated until the numbers stop changing at the accuracy you are interested in. (Apr 28, 2019)
  • @dmckee thank you for the compliment! I wish it was a software tool but it's my hand. Graphing software draws very nice rendered 3d graphics but I have yet to find one that draws in a more organic way. If you know some that does please recommend! (Apr 29, 2019)
  • I've been experimenting with a Wacom tablet from time to time. But I cheaped out and bought the USD200 one instead of the USD1000 one that is also a high-resolution display. And the result is that I'm having to do a lot of art-school style exercises again to learn to draw on one surface while looking at another, and in the meantime I'm just not able to do some of the more sophisticated things I would like to do. But the pressure sensitivity is very nice. If you have the funds the pro version might be a better investment. (Apr 29, 2019)
  • The numerical technique @GaryGodfrey mentioned is an example of a relaxation method. You can learn more about it from Per Brinch Hansen's report on "Numerical Solution of Laplace's Equation" (surface.syr.edu/eecs_techreports/168), and from many other places too. – Vectornaut (Apr 29, 2019)

Answer 3 (score 24):

For me as a mathematician, the reason why Laplacians (yes, there is a plethora of notions of Laplacians) are ubiquitous in physics is not any symmetry of space. Laplacians also appear naturally when we discuss physical field theories on geometries other than Euclidean space.

I would say, the importance of Laplacians is due to the following reasons:

(i) the potential energy of many physical systems can be modeled (up to errors of third order) by the Dirichlet energy $E(u)$ of a function $u$ that describes the state of the system.

(ii) critical points of $E$, that is functions $u$ with $DE(u) = 0$, correspond to static solutions and

(iii) the Laplacian is essentially the $L^2$-gradient of the Dirichlet energy.

To make the last statement precise, let $(M,g)$ be a compact Riemannian manifold with volume density $\mathrm{vol}$. As an example, you may think of $M \subset \mathbb{R}^3$ being a bounded domain (with sufficiently smooth boundary) and of $\mathrm{vol}$ as the standard Euclidean way of integration. Important: The domain is allowed to be nonsymmetric.

Then the Dirichlet energy of a (sufficiently differentiable) function $u \colon M \to \mathbb{R}$ is given by

$$E(u) = \frac{1}{2}\int_M \langle \mathrm{grad} (u), \mathrm{grad} (u)\rangle \, \mathrm{vol}.$$

Let $v \colon M \to \mathbb{R}$ be a further (sufficiently differentiable) function. Then the derivative of $E$ in direction of $v$ is given by

$$DE(u)\,v = \int_M \langle \mathrm{grad}(u), \mathrm{grad}(v) \rangle \, \mathrm{vol}.$$

Integration by parts leads to

$$\begin{aligned}DE(u)\,v &= \int_{\partial M} \langle \mathrm{grad}(u), N\rangle \, v \, \mathrm{vol}_{\partial M}- \int_M \langle \mathrm{div} (\mathrm{grad}(u)), v \rangle \, \mathrm{vol} \\ &= \int_{\partial M} \langle \mathrm{grad}(u), N \rangle \, v \, \mathrm{vol}_{\partial M}- \int_M g( \Delta u, v ) \, \mathrm{vol}, \end{aligned}$$

where $N$ denotes the unit outward normal of $M$.

Usually one has to take certain boundary conditions on $u$ into account. The so-called Dirichlet boundary conditions are easiest to discuss. Suppose we want to minimize $E(u)$ subject to $u|_{\partial M} = u_0$. Then any allowed variation (a so-called infinitesimal displacement) $v$ of $u$ has to satisfy $v|_{\partial M} = 0$. That means that if $u$ is a minimizer of our optimization problem, then it has to satisfy

$$ 0 = DE(u) \, v = - \int_M g( \Delta u, v ) \, \mathrm{vol} \quad \text{for all smooth $v \colon M \to \mathbb{R}$ with $v|_{\partial M} = 0$.}$$

By the fundamental lemma of the calculus of variations, this leads to the boundary-value problem

$$ \left\{\begin{array}{rcll} - \Delta u &= &0, &\text{in the interior of $M$,}\\ u|_{\partial M} &= &u_0, \end{array}\right.$$

i.e., Laplace's equation with Dirichlet boundary data (with a source term on the right-hand side it becomes the Poisson equation).

Notice that this did not require the choice of any coordinates, making these entities and computations covariant in the Einsteinian sense.

This argument also generalizes to more general (vector-valued, tensor-valued, spinor-valued, or whatever-you-like-valued) fields $u$. It even generalizes to Lorentzian manifolds $(M,g)$ (where the metric $g$ has signature $(\pm , \mp,\dotsc, \mp)$); then $E(u)$ coincides with the action of the system, critical points of $E$ correspond to dynamic solutions, and the resulting Laplacian of $g$ coincides with the wave operator (or d'Alembert operator) $\square$.
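
To see points (i)–(iii) concretely, here is a small numerical sketch (my own, not the answerer's): on a 1-D grid the discrete Dirichlet energy is $E(u) = \tfrac{1}{2}\sum_i (u_{i+1}-u_i)^2/h$, its gradient with respect to the interior values is minus the discrete Laplacian, and gradient descent with fixed (Dirichlet) boundary values relaxes $u$ to a solution of the discrete Laplace equation.

```python
import numpy as np

n, h = 51, 1.0 / 50
u = np.zeros(n)
u[0], u[-1] = 0.0, 1.0                 # Dirichlet boundary data u_0

def energy(u):                         # discrete Dirichlet energy
    return 0.5 * np.sum(np.diff(u)**2) / h

step = 0.4 * h                         # small enough for the descent to be stable
for _ in range(20000):
    grad = -(u[:-2] - 2.0 * u[1:-1] + u[2:]) / h   # dE/du_i = -(u_{i+1} - 2 u_i + u_{i-1}) / h
    u[1:-1] -= step * grad             # descend the energy; the boundary stays fixed

# the minimizer solves the discrete Laplace equation; in 1-D that is a straight line
print(energy(u), np.allclose(u, np.linspace(0.0, 1.0, n), atol=1e-3))
```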

Comments:

  • Bit late, but I think this has knzhou's answer hidden in it: how is the inner product of gradients defined? You're taking the usual inner product on $\mathbb{R}^3$, right? So I can be pedantic and ask: why not take a different inner product? Rotation and translation invariance still seems to be the answer. – Sam Jaques (May 4, 2020)
  • Well, if you weaken "rotation-invariance" to "isotropy" (rotation-invariance per tangent space) and abandon the translation invariance, I am with you. My point is that a general (pseudo-)Riemannian manifold need not have any global isometries, but the Laplacian/d'Alembert operator is still well-defined. (May 4, 2020)

Answer 4 (score 18):

The expression you've given for the Laplacian, $$ \nabla^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}, $$ is a valid way to express it, but it is not a particularly useful definition for that object. Instead, a much more useful way to see the Laplacian is to define it as $$ \nabla^2 f = \nabla \cdot(\nabla f), $$ i.e., as the divergence of the gradient, where:

  • The gradient of a scalar function $f$ is the vector $\nabla f$ which points in the direction of fastest ascent, and whose magnitude is the rate of growth of $f$ in that direction; this vector can be cleanly characterized by requiring that if $\boldsymbol{\gamma}:\mathbb R \to E^3$ is a curve in Euclidean space $E^3$, the rate of change of $f$ along $\boldsymbol\gamma$ be given by $$ \frac{\mathrm d}{\mathrm dt}f(\boldsymbol{\gamma}(t)) = \frac{\mathrm d\boldsymbol{\gamma}}{\mathrm dt} \cdot \nabla f(\boldsymbol{\gamma}(t)). $$

  • The divergence of a vector field $\mathbf A$ is the scalar $\nabla \cdot \mathbf A$ which characterizes how much $\mathbf A$ 'flows out of' an infinitesimal volume around the point in question. More explicitly, the divergence at a point $\mathbf r$ is defined as the normalized flux out of a ball $B_\epsilon(\mathbf r)$ of radius $\epsilon$ centered at $\mathbf r$, in the limit where $\epsilon \to 0^+$, i.e. as $$ \nabla \cdot \mathbf A(\mathbf r) = \lim_{\epsilon\to0^+} \frac{1}{\mathrm{vol}(B_\epsilon(\mathbf r))} \iint_{\partial B_\epsilon(\mathbf r)} \mathbf A \cdot \mathrm d \mathbf S. $$

Note that both of these definitions are completely independent of the coordinate system in use, which also means that they are invariant under translations and under rotations. It just so happens that $\nabla^2$ coincides with $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$ in Cartesian coordinates, but that is a happy coincidence: the Laplacian occurs naturally in many places because of its translational and rotational invariance, and that then implies that the form $\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}$ appears frequently. That simplicity is inherited from the coordinate-free definition, not the other way around.
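
A quick numerical illustration of the flux definition (my own sketch; it uses a small box instead of a ball, and the vector field is arbitrary): the net flux of $\mathbf A$ out of a tiny cube around a point, divided by the cube's volume, approaches the Cartesian expression for the divergence.

```python
import numpy as np

A = lambda r: np.array([r[0]**2, r[1] * r[2], np.sin(r[2])])   # arbitrary test vector field
div_A_exact = lambda r: 2 * r[0] + r[2] + np.cos(r[2])         # its divergence, computed by hand

p, h = np.array([0.4, -0.7, 1.1]), 1e-4
flux = 0.0
for i in range(3):
    e = np.zeros(3)
    e[i] = h
    # outward flux through the pair of faces perpendicular to axis i (midpoint rule)
    flux += (A(p + e)[i] - A(p - e)[i]) * (2 * h)**2

print(flux / (2 * h)**3, div_A_exact(p))   # both approximately 2.3536
```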

Comments:

  • It makes sense to me why a gradient defined in that way would be simpler for Cartesian coordinates, since they also form a basis in the strict sense of a vector space, which spherical coordinates don't. In the definition you gave, normalization/units are sneaking in, I think: The dot product implies the units must be the same to be added together. Which is weird because the left-hand side definition, $\frac{d}{dt}f(\gamma(t))$, doesn't seem to use units at all. But: The derivative of $f$ can't be defined without a metric on $E^3$, and the metric sneaks in the necessary normalization. – Sam Jaques (Apr 29, 2019)
  • An attempted summary of your answer: The Laplacian looks nice with Cartesian coordinates because they play nice with the $L^2$ norm, and we want that because real-life distance uses the $L^2$ norm. – Sam Jaques (Apr 29, 2019)
