$
\newcommand\form[1]{\langle#1\rangle}
\newcommand\K{\mathbb K}
\newcommand\R{\mathbb R}
\newcommand\linspan{\mathrm{span}}
\newcommand\Extsym{{\textstyle\bigwedge}}
\newcommand\Ext{\mathop\Extsym}
\newcommand\Blades[1]{\mathop{\Extsym^{\!(#1)}}}
\newcommand\MVects[1]{\mathop{\Extsym^{\!#1}}}
\newcommand\dd{\mathrm d}
\newcommand\lintr{\mathbin{{\lrcorner}}}
$
Here is a perspective I do not quite see represented. To begin, we consider an $n$-dimensional real vector space $V$, though much of this generalizes to other scalar fields $\K$ (here $\K = \R$).
What we wish to do now is find an algebraic way to represent subspaces of $V$. Given a hypothetical object $X$ representing a subspace $[X] \subseteq V$, we want a product $\wedge$ such that
$$
[X] = \{v \in V \;:\; v\wedge X = 0\}.
$$
But we'd also like $v \in V$ to represent itself; hence the axiom
$$
v\wedge v = 0.
$$
For $[X]$ to be a subspace, $v\wedge X$ ought to be linear in $v$.
Given the desire
$$
v\wedge X = 0 \iff X\wedge v = 0
$$
we assume bilinearity of $\wedge$.
We also will suppose that $[v\wedge v'] = \linspan\{v, v'\}$; then for $u, v, w \in V$
$$
u\wedge(v\wedge w) = 0 \iff u = av + bw \iff bw = u - av \implies (u\wedge v)\wedge(bw) = 0.
$$
If $b = 0$ then $u = av$, so $u\wedge v = 0$ and hence $(u\wedge v)\wedge w = 0$. If $b \not= 0$, bilinearity lets us divide out the factor $b$, so again $(u\wedge v)\wedge w = 0$. A similar argument in the reverse direction then shows
$$
u\wedge(v\wedge w) = 0 \iff (u\wedge v)\wedge w = 0.
$$
We deem it reasonable to assume that $\wedge$ is associative on vectors.
We have arrived at the following assumptions on $\wedge$:
- Bilinearity,
- Associativity,
- $\forall v \in V.\, v\wedge v = 0$,
and it turns out that these make the conditions
$$
v\wedge X = 0 \iff X\wedge v = 0,\quad
[v\wedge v'] = \linspan\{v, v'\}
$$
redundant.
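For instance, bilinearity and $v\wedge v = 0$ already give anticommutativity on vectors, which is what drives the first condition:
$$
0 = (v+w)\wedge(v+w) = v\wedge v + v\wedge w + w\wedge v + w\wedge w = v\wedge w + w\wedge v.
$$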
We are led directly to the exterior algebra
$\Ext V$, and we find that
$$
[v_1\wedge v_2\wedge\cdots\wedge v_k] = \linspan\{v_1,v_2,\dotsc,v_k\}
$$
so we need look no further.
(Note though that $[0] = V$.)
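As a concrete sanity check, in coordinates the condition $v\wedge v_1\wedge\cdots\wedge v_k = 0$ says exactly that appending $v$ to $v_1,\dotsc,v_k$ does not increase the rank. A small numerical sketch (the helper name `in_blade_span` is ours, not standard):

```python
import numpy as np

def in_blade_span(v, vs):
    """Test v ∈ [v1∧...∧vk]: the wedge v1∧...∧vk∧v vanishes
    exactly when appending v does not raise the rank."""
    A = np.column_stack(vs)
    return np.linalg.matrix_rank(np.column_stack([A, v])) == np.linalg.matrix_rank(A)

v1, v2 = np.array([1., 0., 0.]), np.array([0., 1., 0.])
assert in_blade_span(np.array([2., -3., 0.]), [v1, v2])      # lies in span{v1, v2}
assert not in_blade_span(np.array([0., 0., 1.]), [v1, v2])   # does not
```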
We may thus represent $k$-dimensional subspaces $[X] \subseteq V$ with nonzero elements $X \in \Blades kV$ where
$$
\Blades kV = \{v_1\wedge\cdots\wedge v_k \;:\; v_1,\dotsc,v_k \in V\}.
$$
We will call elements of $\Blades kV$ $k$-blades.
We also define $\MVects kV$ to be the set of all sums of $k$-blades,
and call its elements grade-$k$ multivectors, or just $k$-vectors.
The exterior algebra is a direct sum of grades:
$$
\Ext V = \bigoplus_{k=0}^n\MVects kV.
$$
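The grade decomposition can be checked on dimensions: a basis of $\MVects kV$ is given by wedges of $k$ distinct basis vectors of $V$, so $\dim \MVects kV = \binom nk$, and the grades sum to $\dim \Ext V = 2^n$. A quick check:

```python
from math import comb

n = 5  # dim V
grade_dims = [comb(n, k) for k in range(n + 1)]  # dim of each grade-k summand
assert grade_dims == [1, 5, 10, 10, 5, 1]
assert sum(grade_dims) == 2 ** n                 # dim ⋀V = 2^n
```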
Let $M$ be a differentiable manifold.
A submanifold $S \subseteq M$ has at each point $x \in S$
a tangent subspace $T_xS \subseteq T_xM$.
Let us realize each of these tangent subspaces as a blade in $\Ext T_xM$.
We define a quilted $k$-surface as a $k$-dimensional submanifold $S \subseteq M$
together with an assignment $x \mapsto S_x \in \Blades kT_xM$ for each $x \in M$
such that $S_x = 0$ iff $x \not\in S$,
and such that $[S_x] = T_xS$ whenever $x \in S$.
An integral $I$ takes a submanifold $S$ and returns a scalar $I(S)$.
Heuristically, such an integral should assign a weight $w_x$ to each $x \in S$, from which
$$
I(S) = \sum_{x\in S}w_x\epsilon,
$$
where $\epsilon$ is an infinitesimal $k$-volume.
When $S$ is a quilted $k$-surface,
we deem it reasonable that $w_x = I_x(S_x)$ where $I_x : \Blades kT_xM \to \K$.
Integrating over two surfaces $S, S'$ independently should yield the sum of their integrals.
Define the formal sum of two quilted surfaces as a multi-surface where:
$$
S + S' = S\cup S',\quad (S+S')_x = S_x + S'_x.
$$
A quilted surface is also defined to be a multi-surface;
define the sum of two general multi-surfaces analogously.
Then it stands to reason that
$$
I_x(S_x + S'_x) = I_x(S_x) + I_x(S'_x).
$$
The scaling of $S$ by $a \in \K$ should result in its tangent spaces being scaled: $(aS)_x = aS_x$.
However, we could also achieve a scaling of $S$ by scaling the $k$-volume $\epsilon \mapsto a\epsilon$.
(Both of these viewpoints are also reasonable when $a$ includes a change in orientation.)
It follows that
$$
\epsilon I_x(aS_x) = a\epsilon I_x(S_x) \implies I_x(aS_x) = aI_x(S_x).
$$
So each $I_x$ is a linear form on grade-$k$ multivectors;
since the exterior algebra is a direct sum of grades,
we can let each $I_x$ be a linear form on $\Ext T_xM$, i.e. $I_x \in (\Ext T_xM)^*$.
(This can be thought of as just bundling together integrals for submanifolds of all dimensions.)
Our goal is now to describe $(\Ext V)^*$ for a vector space $V$.
There are multiple ways to arrive at a natural bilinear pairing $\Ext V^*\times\Ext V \to \K$
which may be defined by
$$
\form{v_1^*\wedge\cdots\wedge v_k^*,\;
v_1\wedge\cdots\wedge v_l}
= \delta_{kl}\det\bigl(v_i^*(v_j)\bigr)_{i,j=1}^k
$$
for any $v_1^*,\dotsc, v_k^* \in V^*$ and any $v_1,\dotsc,v_l \in V$.
This pairing is non-degenerate and furnishes an isomorphism $\Ext V^* \to (\Ext V)^*$ via $X^* \mapsto \form{X^*, {-}}$.
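The pairing is directly computable. A numpy sketch (the helper name `pairing` is ours; covectors are represented as arrays acting by the dot product):

```python
import numpy as np

def pairing(covs, vecs):
    """⟨v1*∧...∧vk*, v1∧...∧vl⟩ = δ_kl · det(v_i*(v_j))."""
    if len(covs) != len(vecs):
        return 0.0  # mismatched grades pair to zero
    return float(np.linalg.det(np.array([[c @ v for v in vecs] for c in covs])))

e1, e2 = np.array([1., 0.]), np.array([0., 1.])
assert np.isclose(pairing([e1, e2], [e1, e2]), 1.0)   # ⟨e^1∧e^2, e_1∧e_2⟩ = det I
assert np.isclose(pairing([e1, e2], [e2, e1]), -1.0)  # swapping vectors flips the sign
assert pairing([e1], [e1, e2]) == 0.0                 # δ_kl kills mismatched grades
```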
In this way, a differential form is a section of the exterior bundle $\Ext T^*M$: at each point $x$ it gives an element of $\Ext T^*_xM \cong (\Ext T_xM)^*$.
The (left) interior product $\lintr : \Ext V\times\Ext V^* \to \Ext V^*$
is given by the adjoints of the exterior product: for $X^* \in \Ext V^*$ and $Y, Z \in \Ext V$
$$
\form{Y\lintr X^*, Z} = \form{X^*, Y\wedge Z}.
$$
It is easy to confirm that $v\lintr v^* = v^*(v)$ for $v \in V$ and $v^* \in V^*$,
and that more generally $X\lintr X^* = \form{X^*, X}$.
This gives us the interpretation of a coblade $X^* \in \Blades kV^*$ as a subspace of $V$:
$$
[X^*] = \{v \in V \;:\; v\lintr X^* = 0\}.
$$
Under this interpretation, nonzero covectors $v^*, w^* \in V^*$ represent hyperplanes;
when linearly independent, their exterior product $v^*\wedge w^*$
represents the intersection of these hyperplanes.
In general, if $X^*, Y^*$ are nonzero coblades then $X^*\wedge Y^*$ represents the intersection $[X^*]\cap[Y^*]$
unless there is a hyperplane $H \subset V$ with $[X^*] \subseteq H$ and $[Y^*] \subseteq H$,
in which case $X^*\wedge Y^* = 0$.
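For a vector $v$ and covectors $\alpha, \beta$, the adjoint definition unwinds to $v\lintr(\alpha\wedge\beta) = \alpha(v)\beta - \beta(v)\alpha$ (a standard expansion, not derived above); when $\alpha, \beta$ are independent this vanishes iff $\alpha(v) = \beta(v) = 0$, recovering $[\alpha\wedge\beta] = [\alpha]\cap[\beta]$. A numerical check of the adjoint identity $\form{v\lintr(\alpha\wedge\beta), w} = \form{\alpha\wedge\beta, v\wedge w}$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)   # covectors α, β
v, w = rng.standard_normal(3), rng.standard_normal(3)   # vectors

# Left side: v ⌟ (α∧β) = α(v)β − β(v)α, then pair the resulting covector with w
lhs = (a @ v) * (b @ w) - (b @ v) * (a @ w)
# Right side: ⟨α∧β, v∧w⟩ = det [[α(v), α(w)], [β(v), β(w)]]
rhs = np.linalg.det(np.array([[a @ v, a @ w], [b @ v, b @ w]]))
assert np.isclose(lhs, rhs)
```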
Applying this interpretation to a differential form $\omega$ is not helpful, however.
$[\omega_x]$ tells us what $\omega$ annihilates at $x$,
and so $\omega_x(X)$ for $X \in \Ext T_xM$ tells us (up to orientation)
"how far away" $[X]$ is from $[\omega_x]$.
But we want to know what $\omega$ measures when integrated, not what it ignores.
Given coordinates $(x^i)_{i=1}^n$ for open $U \subseteq M$,
there is an associated basis $\{e_i(x)\}_{i=1}^n \subset T_xM$ for each $x \in U$ defined by
$$
e_i(x) = \left.\frac\partial{\partial x^i}\right|_x,
$$
where we're adopting the usual practice of identifying $T_xM$
with the space of directional derivatives evaluated at $x$.
Every basis has a unique dual basis $\{e^i\}_{i=1}^n \subset T^*_xM$
such that $\form{e^i, e_j} = \delta_{ij}$.
In fact, $e^i$ is exactly the differential of the coordinate function $x \mapsto x^i$,
and we adopt the notation $\dd x^i = e^i$.
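Concretely, if the basis vectors $e_i$ are the columns of a matrix $E$, the dual basis covectors $e^i$ are the rows of $E^{-1}$, since $(E^{-1}E)_{ij} = \delta_{ij}$. A quick numpy check:

```python
import numpy as np

E = np.array([[2., 1.],
              [0., 1.]])          # columns are a basis e_1, e_2 of R^2
dual = np.linalg.inv(E)           # rows are the dual basis e^1, e^2
assert np.allclose(dual @ E, np.eye(2))  # ⟨e^i, e_j⟩ = δ_ij
```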
The linear maps $\xi_x : T_xM \to T^*_xM$ taking $e_i(x) \mapsto \xi_x(e_i(x)) = e^i(x)$
define a (coordinate-dependent) "metric" on $U$ via $g_x(u, v) = \xi_x(u)(v)$.
This is what allows us to interpret differential forms expressed in coordinates satisfactorily.
If $[X^*]$ is what $X^*$ ignores,
then $[X^*]^\perp$ under the metric $g$ is exactly what it measures.
For instance,
- $[\dd x^i]^\perp$ is the line orthogonal to the hyperplane $x^i = 0$.
- $[\dd x^1\wedge\dd x^2]^\perp$ is the plane
orthogonal to the intersection of the hyperplanes $x^1 = 0$ and $x^2 = 0$.
More practically, $\xi_x$ extends to an isomorphism $\Ext T_xM \cong \Ext T^*_xM$,
whence $[X^*]^\perp = [\xi_x^{-1}(X^*)]$.
This means that we can interpret e.g. $\dd x^i$
as measuring how close a vector is to the line spanned by $\xi_x^{-1}(\dd x^i) = e_i$,
or $\dd x^1\wedge\dd x^2$ as measuring how close a plane is to the plane $[e_1\wedge e_2]$.
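This "metric" can be written down in the same matrix picture (an illustrative sketch, not canonical data on $M$): if $u = Ec$ has coefficient vector $c$ in the basis, then $\xi(u) = \sum_i c_i e^i$, so $g(u, v) = (E^{-1}u)\cdot(E^{-1}v)$. The coordinate basis comes out $g$-orthonormal, which is exactly what makes $[\dd x^1]^\perp$ the $e_1$ line:

```python
import numpy as np

rng = np.random.default_rng(1)
E = rng.standard_normal((3, 3))   # columns: a generic basis e_1, e_2, e_3
Einv = np.linalg.inv(E)

def g(u, v):
    """g(u, v) = ξ(u)(v) with ξ(e_i) = e^i: pair coefficient vectors via E^{-1}."""
    return (Einv @ u) @ (Einv @ v)

e = [E[:, i] for i in range(3)]
# The coordinate basis is g-orthonormal by construction ...
G = np.array([[g(ei, ej) for ej in e] for ei in e])
assert np.allclose(G, np.eye(3))
# ... so e_1 is g-orthogonal to [dx^1] = span{e_2, e_3}
assert np.isclose(g(e[0], e[1]), 0.0) and np.isclose(g(e[0], e[2]), 0.0)
```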