8
$\begingroup$

A differential $n$-form is defined as a totally antisymmetric $(0,n)$-tensor field on a manifold. Why must they be antisymmetric?

I understand this has something to do with asking for which kinds of tensors one can define a derivative without any additional structure.

$\endgroup$
13
  • 4
    $\begingroup$ One point of view here is to think of differential forms as generalizations of the determinant, and the determinant of a matrix is an antisymmetric function of the rows/columns. As for why determinants are antisymmetric, you want redundant rows to give zero volume/determinant. See math.stackexchange.com/questions/1199530/… $\endgroup$ Commented Feb 11, 2022 at 22:04
  • 7
    $\begingroup$ Why they "must" be antisymmetric depends on why you care about them. Definitions are adapted to applications. The bottom line in most interpretations I'd say is that differential forms should be sensitive to orientations and an orientation is reversed by transposing two elements of an ordered basis. Of course, this ultimately also just comes down to differential forms behaving like generalized determinants in some sense. $\endgroup$
    – Thorgott
    Commented Feb 11, 2022 at 22:06
  • 3
    $\begingroup$ @Thorgott: my attitude is that the sensitivity to orientation is an accident, not the deep or motivating idea. The really fundamental geometric idea is that you get zero volume when you have a parallelepiped specified by redundant vectors. That, along with multilinearity, is what leads to sensitivity to orientation. But geometrically, you don’t care so much a priori about orientations. Indeed measures/densities on manifolds can do without orientations altogether, at the price of losing linearity. This is just my geometric perspective. $\endgroup$ Commented Feb 11, 2022 at 22:13
  • 1
    $\begingroup$ Yes, that’s true. And of course multilinearity and sensitivity to orientation do matter crucially for some applications (e.g. fluxes, work), so as you say it is ultimately a matter of what problems you want to solve. I just try to lay emphasis on the geometric meaning of the determinant whenever I can, since it often seems to get lost in the algebra. $\endgroup$ Commented Feb 11, 2022 at 22:53
  • 4
    $\begingroup$ @MoisheKohan That is an unbelievably poor response. All ideas/definitions in mathematics are based on intuitive reasons, or other structural reasons, behind them. Mathematicians did not just invent the definition of continuity out of nowhere, there was idea behind that definition. Likewise, the OP is asking what the idea behind differential forms was, especially having them be anti-symmetric. $\endgroup$ Commented Feb 12, 2022 at 1:19

6 Answers

18
$\begingroup$

EDIT: I can't help but give more context to this.

The ultimate goal is to find a way to define an integral on a manifold without using any geometric structure such as a Riemannian metric. Why anyone would think this should be possible is beyond me. But now that we know the answer, we can concoct a story like the following.

If you're going to integrate something over a manifold, you'd better start with the simplest possible situation. The usual way to start is to chop a 2d region into rectangles and do a weighted sum of areas of the rectangles. But that construction is clearly coordinate dependent.

So the initial focus is on defining the area of a rectangle in an abstract vector space $V$ (not $\mathbb{R}^2$ because that implies coordinates). But in $V$ there are no rectangles, only parallelograms.

We therefore want to show that the area of the parallelogram spanned by two vectors in $\mathbb{R}^2$ can be defined in a purely abstract way that uses only the vector space structure and no other assumptions on $V$. To do this, the idea is to use what we know about the area of a parallelogram in Euclidean space but without relying on any formula for it.

Let $A(v,w)$ be the area of the parallelogram spanned by $v, w \in V$. Note that the area function is really well defined only up to a scalar factor, but we'll just work with one of them. We want to derive the properties of $A$ using, say, only pictures and basic geometry.

For convenience, let's draw $v$ horizontally and observe that any vector not parallel to $v$ points either upward or downward.

A picture of the two parallelograms stacked on the common base $v$ shows that if the vectors $w_1, w_2$ both point upward, then $$A(v,w_1+w_2) = A(v,w_1) + A(v,w_2).$$ Using the fact that the area of a parallelogram is base times height, it's easy to see that, if $c > 0$, then $$ A(v,cw) = cA(v,w). $$ This is tantalizingly close to a linear function of $w$. In fact, there is a unique extension of $A$ to a linear function of $w$, but then the area of the parallelogram is $|A(v,w)|$. You could start with this and define things that can be integrated over a manifold. This leads to the definition of a density. The argument below shows that $A$ still has to be an exterior $2$-tensor, so antisymmetry is always there, but you can hide it using the absolute value.

The brilliant insight someone came up with is that it makes sense to omit the absolute value and call $A(v,w)$ the signed area of the parallelogram. Linearity with respect to $w$ implies that the area of the parallelogram is positive if $w$ points upward and negative if it points downward. Antisymmetry now appears, because if $w$ points upward relative to $v$, then $v$ points downward relative to $w$. Therefore $$A(w,v) = - A(v,w).$$ Of course, this also implies that $A(v,w)$ is a linear function of $v$. Therefore, $A$ is an antisymmetric $2$-tensor.
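As a quick numerical illustration (mine, not part of the original argument), the signed area in $\mathbb{R}^2$ can be realized as the $2\times 2$ determinant; the helper `A` below is a hypothetical name for that choice, and the checks mirror the properties derived above:

```python
import numpy as np

# Hypothetical helper: signed area A(v, w) of the parallelogram spanned by
# v, w in R^2, taken here to be the 2x2 determinant.
def A(v, w):
    return v[0] * w[1] - v[1] * w[0]

v  = np.array([2.0, 0.0])
w1 = np.array([1.0, 3.0])
w2 = np.array([-1.0, 2.0])

# Additivity in the second slot (both w1 and w2 "point upward"):
assert np.isclose(A(v, w1 + w2), A(v, w1) + A(v, w2))

# Homogeneity, now valid for all scalars c (no absolute value needed):
assert np.isclose(A(v, 3.0 * w1), 3.0 * A(v, w1))
assert np.isclose(A(v, -3.0 * w1), -3.0 * A(v, w1))

# Antisymmetry: swapping the vectors flips the sign.
assert np.isclose(A(w1, v), -A(v, w1))
```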

Notice that we never used coordinates, length, or angle in deriving this conclusion. So $A$ is well-defined, up to a nonzero constant factor, with respect to any linear transformation of $V$. Orientation now appears because the constant factor has a sign.

So why use signed area instead of area? The fact that $A$ is algebraically far easier to work with than $|A|$ is strong motivation. However, the deciding factor is Stokes' theorem. At this point, I do use coordinates and consider a rectangle in $\mathbb{R}^2$. The goal is to find a 2d fundamental theorem of calculus. If you ultimately want to make this coordinate independent, then the only possible integral around the boundary is a line integral. If you write the formula for a line integral (of a $1$-form) and apply the 1d fundamental theorem of calculus, the resulting double integral is easily seen to involve an antisymmetric 2-tensor. In this one calculation, you see how to define the exterior derivative of a $1$-form and prove Stokes' theorem on a rectangle.

Moreover, if you omit the absolute value, then you can glue rectangles together and extend Stokes' theorem to manifolds with boundary. That's not possible with densities.

From here, it is straightforward to develop the calculus of differential forms.

$\endgroup$
2
  • 1
    $\begingroup$ +1. My understanding is that this is not just a modern story to tell but the historical story that was told — i.e. exactly how Grassmann argued (but to be sure, I’d have to dig out that tome again…) $\endgroup$ Commented Feb 12, 2022 at 7:55
  • 1
    $\begingroup$ @symplectomorphic, you're right. I'm just too lazy to figure out where this story really came from. There's no way I came up with it first. Also, I have the huge advantage of 20/20 hindsight. $\endgroup$
    – Deane
    Commented Feb 12, 2022 at 15:12
9
$\begingroup$

There is a general theme in mathematics that introducing $\pm$ into your definition often leads to nicer mathematical properties. You first come across this in calculus. You define $\int_a^b f$ to be equal to $-\int_b^a f$ for the simple reason that you are forced to if you want the "substitution rule" to work in general.
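As a small numeric check (my addition, under the assumption of a simple midpoint Riemann sum), the signed convention is exactly what the substitution rule $\int_a^b f(g(t))g'(t)\,dt = \int_{g(a)}^{g(b)} f(u)\,du$ needs when $g$ is decreasing:

```python
# Signed Riemann sum: works for a > b too, because h = (b - a)/n < 0,
# which is precisely the convention int_a^b f = -int_b^a f.
def integrate(f, a, b, n=100000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f  = lambda u: u ** 2
g  = lambda t: 1.0 - t      # a decreasing substitution, so g(0) > g(1)
dg = lambda t: -1.0

lhs = integrate(lambda t: f(g(t)) * dg(t), 0.0, 1.0)  # int_0^1 f(g(t)) g'(t) dt
rhs = integrate(f, g(0.0), g(1.0))                    # int_1^0 f(u) du, signed
assert abs(lhs - rhs) < 1e-6                          # both equal -1/3
```

Without the signed convention the right-hand side would come out $+1/3$ and the substitution rule would fail for decreasing $g$.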

When it comes to differential forms, they are generalizations of area. Suppose $A(x,y)$ represents the area determined by two vectors $x,y$ in the plane $\mathbb{R}^2$. If you choose to define area using only the positive convention, then $A(cx,y) = |c| A(x,y)$. Notice the presence of $|c|$: it makes the properties of $A(\cdot,\cdot)$ more annoying to deal with.

Furthermore, if we use oriented area, then $A(x,y) = -A(y,x)$, in particular it implies that $A(x,x) = -A(x,x)$, and so $A(x,x) = 0$, which is exactly what it should be since there is no area formed!

And then you can keep going. The bilinear properties of $A(\cdot,\cdot)$ simply will not work if you force yourself to use absolute area instead.

$\endgroup$
2
$\begingroup$

A $(0,n)$-tensor $w$ is antisymmetric $\iff$ $w(v_{1},...,v_{n})=0$ when $v_{1},...,v_{n}$ are linearly dependent.

Let $V$ be a vector space. We define an $n$-volume as an $n$-linear mapping $Vol^{n}:V^{n}=V\times...\times V\to\mathbb{R}$ such that if $v_{1},...,v_{n}$ are linearly dependent vectors then $Vol^{n}(v_{1},...,v_{n})=0$.

This is due to a visual argument: in $\mathbb{R}^{3}$, three linearly dependent vectors lie in the same plane, so the parallelepiped they form occupies a volume of $0$.

Similarly, in $\mathbb{R}^{2}$ two linearly dependent vectors lie on the same line, so the parallelogram they form occupies an area of $0$. In this case we call it area instead of $2$-volume.

The condition "linearly dependent $\implies$ volume equal to $0$" is equivalent to being antisymmetric, so the volumes defined above are exactly the $n$-forms. The equivalence is very easy to check: it all comes down to the fact that, in both cases, repeated vectors annihilate the tensor.

So a field of $n$-forms $w$ is just a way to measure the volume of $n$ vector fields: at each point of the manifold it gives the volume of the parallelepiped that the vector fields span in the tangent space.
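To illustrate both directions of the equivalence numerically (my sketch, using the standard determinant as the $3$-volume on $\mathbb{R}^3$):

```python
import numpy as np

# A minimal sketch: the standard 3-volume on R^3, realized as a determinant.
def vol3(v1, v2, v3):
    return np.linalg.det(np.column_stack([v1, v2, v3]))

v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])

# Linearly dependent input (third vector = 2*v1 - v2) => zero volume:
assert np.isclose(vol3(v1, v2, 2 * v1 - v2), 0.0)

# ...and vanishing on dependent inputs forces antisymmetry under swaps:
v3 = np.array([3.0, 0.0, 1.0])
assert np.isclose(vol3(v1, v2, v3), -vol3(v2, v1, v3))
```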

$\endgroup$
0
1
$\begingroup$

Differential forms can be used to define orientations on smooth manifolds, and this relies heavily on their being antisymmetric. To be more precise, first notice that if you have a vector space $V$ of dimension $n$, then the map $\det:V^n\rightarrow \mathbb{R}$ is an antisymmetric $n$-linear function (make a matrix out of the vectors by putting them next to each other, then compute the determinant). You can check that any $n$-linear antisymmetric map is just some constant times $\det$. Now suppose that you have a smooth manifold $M$ of dimension $n$, and you have a nowhere zero differential $n$-form $\omega$. Then at each $p\in M$, $\omega$ allows you to choose an orientation for $T_pM$; this orientation "depends smoothly on $p$" and is compatible across the whole manifold. More precisely, a smooth manifold $M$ is orientable (meaning you can choose an atlas such that the transition functions are orientation preserving) if and only if $M$ admits a nowhere zero differential $n$-form.
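A quick numeric check (my addition) of the two properties of $\det$ quoted above, for $V = \mathbb{R}^3$, columns assembled exactly as described:

```python
import numpy as np

# Sanity check (not a proof): det is n-linear in each slot and antisymmetric.
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three random vectors in R^3

det = lambda a, b, c: np.linalg.det(np.column_stack([a, b, c]))

# Linearity in the first slot (note det(v, v, w) = 0 by antisymmetry):
assert np.isclose(det(2 * u + v, v, w), 2 * det(u, v, w) + det(v, v, w))

# Antisymmetry under a transposition of two arguments:
assert np.isclose(det(u, w, v), -det(u, v, w))
```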

$\endgroup$
1
$\begingroup$

$ \newcommand\form[1]{\langle#1\rangle} \newcommand\K{\mathbb K} \newcommand\R{\mathbb R} \newcommand\linspan{\mathrm{span}} \newcommand\Extsym{{\textstyle\bigwedge}} \newcommand\Ext{\mathop\Extsym} \newcommand\Blades[1]{\mathop{\Extsym^{\!(k)}}} \newcommand\MVects[1]{\mathop{\Extsym^{\!k}}} \newcommand\dd{\mathrm d} \newcommand\lintr{\mathbin{{\lrcorner}}} $

Here is a perspective I do not quite see represented. To begin, we consider an $n$-dimensional real vector space $V$, though a lot of this generalizes to other fields.

What we wish to do now is find an algebraic way to represent subspaces of $V$. Given a hypothetical object $X$ representing a subspace $[X] \subseteq V$, we want a product $\wedge$ such that $$ [X] = \{v \in V \;:\; v\wedge X = 0\}. $$ But we'd also like $v \in V$ to represent itself; hence the axiom $$ v\wedge v = 0. $$ For $[X]$ to be a subspace, $v\wedge X$ ought to be linear in $v$. Given the desire $$ v\wedge X = 0 \iff X\wedge v = 0 $$ we assume bilinearity of $\wedge$. We also will suppose that $[v\wedge v'] = \linspan\{v, v'\}$; then for $u, v, w \in V$ $$ u\wedge(v\wedge w) = 0 \iff u = av + bw \iff bw = u - av \implies (u\wedge v)\wedge(bw) = 0. $$ If $b = 0$ then $(u\wedge v)\wedge w = 0$ since $u\wedge v = 0$. If $b \not= 0$, then we still get $(u\wedge v)\wedge w = 0$. A similar argument in the reverse direction then shows $$ u\wedge(v\wedge w) = 0 \iff (u\wedge v)\wedge w = 0. $$ We deem it reasonable to assume that $\wedge$ is associative on vectors. We have arrived at the following assumptions on $\wedge$:

  • Bilinearity,
  • Associativity,
  • $\forall v \in V.\, v\wedge v = 0$,

and it turns out that these make the conditions $$ v\wedge X = 0 \iff X\wedge v = 0,\quad [v\wedge v'] = \linspan\{v, v'\} $$ redundant. We are led directly to the exterior algebra $\Ext V$, and we find that $$ [v_1\wedge v_2\wedge\cdots\wedge v_k] = \linspan\{v_1,v_2,\dotsc,v_k\} $$ so we need look no further. (Note though that $[0] = V$.) We may thus represent $k$-subspaces $[X] \subseteq V$ with elements $X \in \Blades kV$ where $$ \Blades kV = \{v_1\wedge\cdots\wedge v_k \;:\; v_1,\dotsc,v_k \in V\}. $$ We will call elements of $\Blades kV$ $k$-blades. We also define $\MVects kV$ to be the set of all sums of $k$-blades, and call its elements grade-$k$ multivectors, or just $k$-vectors. The exterior algebra is a direct sum of grades: $$ \Ext V = \bigoplus_{k=0}^n\MVects kV. $$
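As a sanity check, antisymmetry of $\wedge$ on vectors is already forced by the first and third axioms, via the usual polarization argument: $$ 0 = (v+w)\wedge(v+w) = v\wedge v + v\wedge w + w\wedge v + w\wedge w = v\wedge w + w\wedge v, $$ hence $v\wedge w = -w\wedge v$ for all $v, w \in V$.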


Let $M$ be a differentiable manifold. A submanifold $S \subseteq M$ has at each point $x \in S$ a unique tangent subspace $T_xS \subseteq T_xM$. Let us realize each such subspace as a blade in $\Ext T_xM$. We define a quilted $k$-surface as a $k$-dimensional submanifold $S \subseteq M$ together with a map written $x \mapsto S_x \in \Blades kT_xM$ such that $S_x = 0$ iff $x \not\in S$ and such that $[S_x] = T_xS \subseteq T_xM$.

An integral $I$ takes a submanifold $S$ and returns a scalar $I(S)$. Heuristically, such an integral should assign a weight $w_x$ to each $x \in S$, from which $$ I(S) = \sum_{x\in S}w_x\epsilon, $$ where $\epsilon$ is an infinitesimal $k$-volume. When $S$ is a quilted $k$-surface, we deem it reasonable that $w_x = I_x(S_x)$ where $I_x : \Blades kT_xM \to \K$.

Integrating over two surfaces $S, S'$ independently should yield the sum of their integrals. Define the formal sum of two quilted surfaces as a multi-surface where: $$ S + S' = S\cup S',\quad (S+S')_x = S_x + S'_x. $$ A quilted surface is also defined to be a multi-surface; define the sum of two general multi-surfaces analogously. Then it stands to reason that $$ I_x(S_x + S'_x) = I_x(S_x) + I_x(S'_x). $$

The scaling of $S$ by $a \in \K$ should result in its tangent spaces being scaled: $(aS)_x = aS_x$. However, we could also achieve a scaling of $S$ by scaling the $k$-volume $\epsilon \mapsto a\epsilon$. (Both of these viewpoints are also reasonable when $a$ includes a change in orientation.) It follows that $$ \epsilon I_x(aS_x) = a\epsilon I_x(S_x) \implies I_x(aS_x) = aI_x(S_x). $$

So each $I_x$ is a linear form on grade-$k$ multivectors; since the exterior algebra is a direct sum of grades, we can let each $I_x$ be a linear form on $\Ext T_xM$, i.e. $I_x \in (\Ext T_xM)^*$. (This can be thought of as just bundling together integrals for submanifolds of all dimensions.)

Our goal is now to describe $(\Ext V)^*$ for a vector space $V$. There are multiple ways to arrive at a natural bilinear pairing $\Ext V^*\times\Ext V \to \K$ which may be defined by $$ \form{v_1^*\wedge\cdots\wedge v_k^*,\; v_1\wedge\cdots\wedge v_l} = \delta_{kl}\det\bigl(v_i^*(v_j)\bigr)_{i,j=1}^k $$ for any $v_1^*,\dotsc, v_k^* \in V^*$ and any $v_1,\dotsc,v_l \in V$. This pairing is non-degenerate and furnishes an isomorphism $\Ext V^* \to (\Ext V)^*$ via $X^* \mapsto \form{X^*, {-}}$.
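The pairing above can be made concrete numerically (my sketch, for $V = \mathbb{R}^3$, $k = l = 2$, identifying $V^*$ with $\mathbb{R}^3$ via the dot product so that $v^*(u) = v\cdot u$):

```python
import numpy as np

# Sketch of the pairing <v1* ^ ... ^ vk*, v1 ^ ... ^ vk> = det( v_i*(v_j) ),
# with covectors represented as vectors acting by the dot product.
def pairing(vstars, vs):
    return np.linalg.det(np.array([[np.dot(vs_, v) for v in vs]
                                   for vs_ in vstars]))

e1, e2, e3 = np.eye(3)

# <e^1 ^ e^2, e_1 ^ e_2> = 1; the pairing is antisymmetric in each factor:
assert np.isclose(pairing([e1, e2], [e1, e2]), 1.0)
assert np.isclose(pairing([e1, e2], [e2, e1]), -1.0)
# Mismatched blades pair to zero:
assert np.isclose(pairing([e1, e2], [e1, e3]), 0.0)
```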

In this way, a differential form is necessarily a section of the exterior bundle $\Ext T^*M$ of the cotangent bundle.

The (left) interior product $\lintr : \Ext V\times\Ext V^* \to \Ext V^*$ is given by the adjoints of the exterior product: for $X^* \in \Ext V^*$ and $Y, Z \in \Ext V$ $$ \form{Y\lintr X^*, Z} = \form{X^*, Y\wedge Z}. $$ It is easy to confirm that $v\lintr v^* = v^*(v)$ for $v \in V$ and $v^* \in V^*$, and that more generally $X\lintr X^* = \form{X^*, X}$. This gives us the interpretation of a coblade $X^* \in \Blades kV^*$ as a subspace of $V$: $$ [X^*] = \{v \in V \;:\; v\lintr X^* = 0\}. $$ Under this interpretation, covectors $v^*, w^* \in V^*$ are hyperplanes; when linearly independent, their exterior product $v^*\wedge w^*$ is the intersection of these hyperplanes. In general, if $X^*, Y^*$ are coblades then $X^*\wedge Y^*$ is their intersection unless there is a hyperplane $H \subset V$ with $[X^*] \subseteq H$ and $[Y^*] \subseteq H$, in which case $X^*\wedge Y^* = 0$.

Applying this interpretation to a differential form $\omega$ is not helpful, however. $[\omega_x]$ tells us what $\omega$ annihilates at $x$, and so $\omega_x(X)$ for $X \in \Ext T_xM$ tells us (up to orientation) "how far away" $[X]$ is from $[\omega_x]$. But we want to know what $\omega$ measures when integrated, not what it ignores.

Given coordinates $(x^i)_{i=1}^n$ for open $U \subseteq M$, there is an associated basis $\{e_i(x)\}_{i=1}^n \subset T_xM$ for each $x \in U$ defined by $$ e_i(x) = \left.\frac\partial{\partial x^i}\right|_x, $$ where we're adopting the usual practice of identifying $T_xM$ with the space of directional derivatives evaluated at $x$. Every basis has a unique dual basis $\{e^i\}_{i=1}^n \subset T^*_xM$ such that $\form{e^i, e_j} = \delta_{ij}$. In fact, $e^i$ is exactly the differential of the coordinate function $x \mapsto x^i$, and we adopt the notation $\dd x^i = e^i$.
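The dual-basis construction can be sketched concretely (my addition, for $\mathbb{R}^3$ in place of $T_xM$): if the columns of a matrix $B$ are a basis $\{e_i\}$, then the rows of $B^{-1}$ are the dual basis $\{e^i\}$, since $(B^{-1}B)_{ij} = \delta_{ij}$.

```python
import numpy as np

# Basis vectors e_1, e_2, e_3 as the columns of B:
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Dual covectors e^1, e^2, e^3 as the rows of B^{-1}:
dual = np.linalg.inv(B)

# <e^i, e_j> = delta_ij, i.e. row i of B^{-1} applied to column j of B:
assert np.allclose(dual @ B, np.eye(3))
```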

The linear maps $\xi_x : T_xM \to T^*_xM$ which take $e_i \mapsto \xi(e_i) = e^i$ define a "metric" on $U$ via $g(u, v) = \xi(u)(v)$. This is what allows us to interpret differential forms expressed in coordinates satisfactorily. If $[X^*]$ is what $X^*$ ignores, then $[X^*]^\perp$ under the metric $g$ is exactly what it measures. For instance,

  • $[\dd x^i]^\perp$ is the line orthogonal to the hyperplane $x^i = 0$.
  • $[\dd x^1\wedge\dd x^2]^\perp$ is the plane orthogonal to the intersection of the hyperplanes $x^1 = 0$ and $x^2 = 0$.

More practically, $\xi_x$ extends to an isomorphism $\Ext T_xM \cong \Ext T^*_xM$ whence $[X^*]^\perp = [\xi^{-1}(X^*)]$. This means that we can interpret e.g. $\dd x^i$ as measuring how close a vector is to the $\xi^{-1}(\dd x^i) = e_i$ line, or $\dd x^1\wedge\dd x^2$ as measuring how close a plane is to the $e_1\wedge e_2$ plane.

$\endgroup$
0
$\begingroup$

To integrate a differential $k$-form $\omega$ over a smooth $k$-manifold $M$, we do the following:

  1. Chop up $M$ into tiny pieces, so that each piece is (approximately) a tiny parallelepiped.
  2. Compute the contribution of each piece. If the $i$th piece is (approximately) a parallelepiped based at a point $p \in M$ and spanned by tangent vectors $v_1, \ldots, v_k$, then its contribution is $\omega_p(v_1, \ldots, v_k)$.
  3. Add up all the individual contributions.

In step 2, if two of the tangent vectors are equal, then the parallelepiped is degenerate and its contribution to the integral should be $0$. It can be shown that if the multilinear function $\omega_p$ has this property, then $\omega_p$ is alternating.
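The three steps above can be sketched directly in code (my illustration, integrating the $2$-form $\omega = x\,dx\wedge dy$ over the unit square in $\mathbb{R}^2$, whose exact value is $\int_0^1\!\int_0^1 x\,dx\,dy = 1/2$):

```python
import numpy as np

# omega_p(v1, v2) = p_x * (v1_x v2_y - v1_y v2_x): alternating in (v1, v2).
def omega(p, v1, v2):
    return p[0] * (v1[0] * v2[1] - v1[1] * v2[0])

N = 200
h = 1.0 / N
total = 0.0
for i in range(N):            # step 1: chop [0,1]^2 into N*N tiny squares
    for j in range(N):
        p = np.array([(i + 0.5) * h, (j + 0.5) * h])     # base point
        v1, v2 = np.array([h, 0.0]), np.array([0.0, h])  # spanning edges
        total += omega(p, v1, v2)   # step 2: contribution of this piece
                                    # step 3: the running sum accumulates them
assert abs(total - 0.5) < 1e-3      # matches the exact value 1/2
```

Swapping `v1` and `v2` in the sum flips the sign of the result, which is exactly the orientation-sensitivity that antisymmetry encodes.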

$\endgroup$
