
I'm studying relativity and, somewhere along the mathematical formalism, I have lost track of the physical interpretation.

What does the vector product mean as an event? That is, how should one interpret the result of the vector product of two vectors (say time-like ones, to stay close to common sense) in 4-dimensional Minkowski spacetime?

  • "as an event"...what? Also, why do you think that the abstract mathematical operation has an interpretation? In many cases, there will be one (e.g. when computing the product of a vector with itself), but in general, mathematical operations are not required to have direct physical interpretations.
    – ACuriousMind
    Commented Jul 7, 2015 at 16:30
  • I wouldn't even know what you mean by that in 3D space. You seem to imply that you know how to assign such a meaning in 3D space. It would definitely help if you tell us what that is for you, so we can help generalise it. Also note that "vector product" means "cross product" in 3D (see wikipedia) -- a concept that you can't generalise to vectors in 4D, so you are probably referring to the "dot product".
    – Daniel
    Commented Jul 7, 2015 at 17:36
  • I take "event" here to mean a particular location and moment--a position in spacetime, in other words. Of course, one should always keep in mind that positions in spacetime do not obey the transformation laws of four-vectors.
    – Muphrid
    Commented Jul 7, 2015 at 17:43
  • @Golz Note that an event is NOT a 4-vector. An event is a point $m\in\mathcal{M}$ in the space-time, whose coordinate representation $\varphi(m)=x^{\mu}\textbf{e}_{\mu}\in\mathbb{R}^4$ has components transforming like a vector under a change of basis in $\mathbb{R}^4$.
    – gented
    Commented Jul 7, 2015 at 20:44

3 Answers


In ordinary vector spaces, the dot product $\cdot$ is a binary operator which takes a pair of vectors $(A,B)$ in the space to the field over which the space is defined. Formally, for a vector space $V$ over a field $K$, the dot product $(\ \ , \ )$ is a bilinear map

$$(\ \ , \ ): V \times V \to K.$$

The inner product only assumes its standard meaning in certain vector spaces. In the case of Minkowski spacetime, the dot (or inner) product between two four-vectors $A$ and $B$ is

$$(A,B) = A^T \eta B,$$

where $\eta$ is the standard metric with signature $(-, +, +, +)$ or $(+, -, -, -)$. In conventional Einstein summation notation, this is written as

$$(A, B) = \eta_{\mu \nu}A^\mu B^\nu$$

How do we interpret this operation? Well, we cannot use the standard Euclidean notions of distance or direction, since the Minkowski metric is indefinite (hyperbolic) rather than Euclidean. Instead, it is better to view the product as a Lorentz-invariant quantity that describes the (hyperbolic) geometric relationship between the two vectors, i.e. one that does not change under a Lorentz transformation $\Lambda \in SO(1,3)$.
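If it helps to see this invariance numerically, here is a minimal sketch (assuming numpy, the $(-,+,+,+)$ signature with the time component written first, and arbitrary sample four-vectors):

```python
import numpy as np

# Minkowski metric with signature (-, +, +, +); time component written first
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def minkowski_dot(A, B):
    """(A, B) = eta_{mu nu} A^mu B^nu = A^T eta B."""
    return A @ eta @ B

# A boost of rapidity phi along x, an element of SO(1,3)
phi = 0.8
L = np.array([[np.cosh(phi), np.sinh(phi), 0.0, 0.0],
              [np.sinh(phi), np.cosh(phi), 0.0, 0.0],
              [0.0,          0.0,          1.0, 0.0],
              [0.0,          0.0,          0.0, 1.0]])

A = np.array([2.0, 0.3, -1.0, 0.5])   # two arbitrary sample four-vectors
B = np.array([1.0, 0.2,  0.4, -0.7])

# Lorentz invariance: the same number before and after the transformation
assert np.isclose(minkowski_dot(A, B), minkowski_dot(L @ A, L @ B))
print(minkowski_dot(A, B))
```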

  • In the case (-, +, +, +) $\to$ (+, +, +, +) it should reduce to the usual cross product; however, your expression does not, because we have $\eta_{\mu \nu}=0$ if $\mu \neq \nu$, which is not the case for the usual cross product.
    – Eddward
    Commented Dec 15, 2020 at 9:05
  • One of the best ways to check is to write the time coordinate as $ix_0$, with the imaginary unit, and then to check whether the same approach works as for all the spatial coordinates.
    – Eddward
    Commented Dec 15, 2020 at 9:16

$\color{blue}{\boldsymbol{\S 1-}\textbf{In General}}$

If we want to define in a consistent way an outer (or vector or cross) product of two $d\boldsymbol{-}$vectors in a $d\boldsymbol{-}$dimensional linear space with $d\boldsymbol{>}3$, we realize that the result must be a $\dfrac{d(d-1)}{2}\boldsymbol{-}$vector whose components can be arranged to form a $d\times d\boldsymbol{-}$antisymmetric matrix.

So, in the case of the $d\boldsymbol{=}4\boldsymbol{-}$dimensional Minkowski space-time, the vector product would be a 6-vector or, equivalently, a $4\times 4\boldsymbol{-}$antisymmetric matrix, and not a 4-vector.
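A trivial sketch of this counting argument (plain Python, nothing assumed beyond the formula above): only for $d\boldsymbol{=}3$ does the number of independent components of the antisymmetric matrix equal $d$ itself, so only there can the product be recast as an ordinary $d$-vector.

```python
# Counting the independent entries of a d x d antisymmetric matrix: d(d-1)/2.
# Only for d = 3 does this equal d, so only there does the "vector product"
# of two d-vectors fit back into a single d-vector.
for d in range(2, 7):
    independent = d * (d - 1) // 2
    print(f"d = {d}: {independent} independent components"
          f"{' = d' if independent == d else ''}")
```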

$\color{blue}{\boldsymbol{\S 2-}\textbf{In the Euclidean real space }\mathbb{R}^3}$

Starting with the Euclidean real space $\mathbb{R}^3$, let two real 3-vectors represented by one-column matrices or their transposed one-row matrices \begin{align} \mathbf{a} & \boldsymbol{=} \begin{bmatrix} \mathrm a_1\vphantom{\dfrac{a}{b}}\\ \mathrm a_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \qquad \qquad \mathbf{a}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm a_1 & \mathrm a_2 & \mathrm a_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{01a}\label{01a}\\ \mathbf{b} & \boldsymbol{=} \begin{bmatrix} \mathrm b_1\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \qquad \qquad \mathbf{b}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm b_1 & \mathrm b_2 & \mathrm b_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{01b}\label{01b} \end{align} We define the vector product of above two vectors as the following $3\times 3\boldsymbol{-}$antisymmetric matrix (simple expressions in brackets are matrices) \begin{align} \left[\,\mathbf{h}\,\right] & \boldsymbol{=}\left[\,\mathbf{a}\boldsymbol{\times}\mathbf{b}\,\right]\boldsymbol{\equiv}\mathbf{b}\mathbf{a}^{\boldsymbol{\top}}\boldsymbol{-}\mathbf{a}\mathbf{b}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm b_1\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} \mathrm a_1 & \mathrm a_2 & \mathrm a_3\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{-} \begin{bmatrix} \mathrm a_1\vphantom{\dfrac{a}{b}}\\ \mathrm a_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} \mathrm b_1 & \mathrm b_2 & \mathrm b_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \nonumber\\ & \boldsymbol{=} \begin{bmatrix} \mathrm b_1\mathrm a_1 & \mathrm b_1\mathrm a_2 & \mathrm b_1\mathrm a_3\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\mathrm a_1 & \mathrm b_2\mathrm a_2 & \mathrm b_2\mathrm a_3\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\mathrm a_1 & \mathrm b_3\mathrm a_2 & \mathrm b_3\mathrm a_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{-} \begin{bmatrix} \mathrm a_1\mathrm b_1 & \mathrm a_1\mathrm b_2 & \mathrm a_1\mathrm b_3 \vphantom{\dfrac{a}{b}}\\ \mathrm a_2\mathrm b_1 & \mathrm a_2\mathrm b_2 & \mathrm a_2\mathrm b_3 \vphantom{\dfrac{a}{b}}\\ \mathrm a_3\mathrm b_1 & \mathrm a_3\mathrm b_2 & \mathrm a_3\mathrm b_3 \vphantom{\dfrac{a}{b}}\\ \end{bmatrix} \tag{02}\label{02} \end{align} that is \begin{equation} \left[\,\mathbf{h}\,\right] \boldsymbol{=}\left[\,\mathbf{a}\boldsymbol{\times}\mathbf{b}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \hphantom{\boldsymbol{=\:}}0 & \boldsymbol{-}\left(\mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\right) & \boldsymbol{+}\left(\mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\left(\mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\right) & \hphantom{\boldsymbol{=}\:}0 & \boldsymbol{-}\left(\mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\left(\mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\right) & \boldsymbol{+}\left(\mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\right) & \hphantom{\boldsymbol{=}\:}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{03}\label{03} \end{equation} But the number of independent elements of this antisymmetric matrix is accidentally equal to the dimension of $\mathbb{R}^3$, so we alternatively define as vector product of the two 3-vectors the 
following 3-vector \begin{equation} \mathbf{h} \boldsymbol{=} \begin{bmatrix} \mathrm h_1\vphantom{\dfrac{a}{b}}\\ \mathrm h_2\vphantom{\dfrac{a}{b}}\\ \mathrm h_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\vphantom{\dfrac{a}{b}}\\ \mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=}\mathbf{a}\boldsymbol{\times}\mathbf{b} \tag{04}\label{04} \end{equation} Note that the matrix $\left[\,\mathbf{h}\,\right]$ of equation \eqref{03} is expressed via the vector $\mathbf{h}$ of \eqref{04} as \begin{equation} \left[\,\mathbf{h}\,\right] \boldsymbol{=}\left[\,\mathbf{a}\boldsymbol{\times}\mathbf{b}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_3 & \boldsymbol{+}\mathrm h_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm h_3 & \hphantom{\boldsymbol{=}}0 &\boldsymbol{-}\mathrm h_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm h_2 & \boldsymbol{+}\mathrm h_1 & \hphantom{\boldsymbol{=}}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{05}\label{05} \end{equation}
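A quick numerical check of equations (02)-(05), a sketch assuming numpy and arbitrary sample vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])     # arbitrary sample vectors
b = np.array([-4.0, 0.5, 2.0])

# Eq. (02): the antisymmetric matrix  [a x b] = b a^T - a b^T
H = np.outer(b, a) - np.outer(a, b)

# Eq. (04): the usual cross product
h = np.cross(a, b)

# Eq. (05): the matrix is built from the three components of h
H_expected = np.array([[ 0.0, -h[2],  h[1]],
                       [ h[2],  0.0, -h[0]],
                       [-h[1],  h[0],  0.0]])

assert np.allclose(H, -H.T)         # antisymmetry
assert np.allclose(H, H_expected)   # agreement between (02)/(03) and (05)
print("Eqs. (02)-(05) agree")
```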

$\color{blue}{\boldsymbol{\S 3-}\textbf{In the Euclidean real space }\mathbb{R}^4}$

Following the relevant steps as in $\boldsymbol{\S 2}$ we have \begin{align} \mathbf{A} & \boldsymbol{=} \begin{bmatrix} \mathrm a_1\vphantom{\dfrac{a}{b}}\\ \mathrm a_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\vphantom{\dfrac{a}{b}}\\ \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \mathbf{a}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \,,\qquad \mathbf{A}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm a_1 & \mathrm a_2 & \mathrm a_3 & \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \hphantom{\mathrm a_1} & \mathbf{a}^{\boldsymbol{\top}} & \hphantom{\mathrm a_3} & \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{06a}\label{06a}\\ \mathbf{B} & \boldsymbol{=} \begin{bmatrix} \mathrm b_1\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\vphantom{\dfrac{a}{b}}\\ \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \mathbf{b}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \,,\qquad \mathbf{B}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm b_1 & \mathrm b_2 & \mathrm b_3 & \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \hphantom{\mathrm b_1} & \mathbf{b}^{\boldsymbol{\top}} & \hphantom{\mathrm b_3} & \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{06b}\label{06b} \end{align} We define the vector product of above two vectors as the following $4\times 4\boldsymbol{-}$antisymmetric matrix \begin{align} \left[\,\mathbf{H}\,\right] & \boldsymbol{=}\left[\,\mathbf{A}\boldsymbol{\times}\mathbf{B}\,\right]\boldsymbol{\equiv}\mathbf{B}\mathbf{A}^{\boldsymbol{\top}}\boldsymbol{-}\mathbf{A}\mathbf{B}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} \mathrm b_1\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\vphantom{\dfrac{a}{b}}\\ \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} \mathrm a_1 & \mathrm a_2 & \mathrm a_3 & \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{-} \begin{bmatrix} \mathrm a_1\vphantom{\dfrac{a}{b}}\\ \mathrm a_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\vphantom{\dfrac{a}{b}}\\ \mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} \mathrm b_1 & \mathrm b_2 & \mathrm b_3 & \mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \nonumber\\ & \boldsymbol{=} \begin{bmatrix} \mathrm b_1\mathrm a_1 & \mathrm b_1\mathrm a_2 & \mathrm b_1\mathrm a_3 & \mathrm b_1\mathrm a_4\vphantom{\dfrac{a}{b}}\\ \mathrm b_2\mathrm a_1 & \mathrm b_2\mathrm a_2 & \mathrm b_2\mathrm a_3 & \mathrm b_2\mathrm a_4\vphantom{\dfrac{a}{b}}\\ \mathrm b_3\mathrm a_1 & \mathrm b_3\mathrm a_2 & \mathrm b_3\mathrm a_3 & \mathrm b_3\mathrm a_4\vphantom{\dfrac{a}{b}}\\ \mathrm b_4\mathrm a_1 & \mathrm b_4\mathrm a_2 & \mathrm b_4\mathrm a_3 & \mathrm b_4\mathrm a_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{-} \begin{bmatrix} \mathrm a_1\mathrm b_1 & \mathrm a_1\mathrm b_2 & \mathrm a_1\mathrm b_3 & \mathrm a_1\mathrm b_4 \vphantom{\dfrac{a}{b}}\\ \mathrm a_2\mathrm b_1 & \mathrm a_2\mathrm b_2 & \mathrm a_2\mathrm b_3 & \mathrm a_2\mathrm b_4 \vphantom{\dfrac{a}{b}}\\ \mathrm a_3\mathrm b_1 & \mathrm a_3\mathrm b_2 & \mathrm a_3\mathrm b_3 & \mathrm a_3\mathrm b_4 \vphantom{\dfrac{a}{b}}\\ \mathrm a_4\mathrm b_1 & \mathrm a_4\mathrm b_2 & \mathrm a_4\mathrm b_3 & \mathrm a_4\mathrm b_4 \vphantom{\dfrac{a}{b}} \end{bmatrix} 
\tag{07}\label{07} \end{align} that is \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=}\left[\,\mathbf{A}\boldsymbol{\times}\mathbf{B}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \hphantom{\boldsymbol{=\:}}0 & \boldsymbol{-}\left(\mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\right) & \boldsymbol{+}\left(\mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\right) & \boldsymbol{+}\left(\mathrm a_4\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\left(\mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\right) & \hphantom{\boldsymbol{=}\:}0 & \boldsymbol{-}\left(\mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\right) & \boldsymbol{+}\left(\mathrm a_4\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\left(\mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\right) & \boldsymbol{+}\left(\mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\right) & \hphantom{\boldsymbol{=}\:}0 & \boldsymbol{+}\left(\mathrm a_4\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\left(\mathrm a_4\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_4\right) & \boldsymbol{-}\left(\mathrm a_4\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_4\right) & \boldsymbol{-}\left(\mathrm a_4\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_4\right) & \hphantom{\boldsymbol{=}\:}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{08}\label{08} \end{equation} The 6 independent elements of $\left[\,\mathbf{H}\,\right]$ constitute the real 6-vector $\mathbf{H}$ with components \begin{equation} \begin{matrix} \mathrm H_1\boldsymbol{=} \mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2 && \mathrm H_2\boldsymbol{=} \mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3 &&\mathrm H_3\boldsymbol{=} \mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\vphantom{\dfrac{a}{b}}\\ \mathrm H_4\boldsymbol{=} \mathrm a_4\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_4 && \mathrm H_5\boldsymbol{=} \mathrm a_4\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_4 &&\mathrm H_6\boldsymbol{=} \mathrm a_4\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_4\vphantom{\dfrac{a}{b}} \end{matrix} \tag{09}\label{09} \end{equation} As a preparation for the case of 4-dimensional Minskowski space-time in $\boldsymbol{\S 4}$ let's define as in $\boldsymbol{\S 2}$ \begin{equation} \mathbf{h} \boldsymbol{=} \begin{bmatrix} \mathrm H_1\vphantom{\dfrac{a}{b}}\\ \mathrm H_2\vphantom{\dfrac{a}{b}}\\ \mathrm H_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathrm a_2\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_2\vphantom{\dfrac{a}{b}}\\ \mathrm a_3\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_3\vphantom{\dfrac{a}{b}}\\ \mathrm a_1\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_1\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=}\mathbf{a}\boldsymbol{\times}\mathbf{b} \tag{10}\label{10} \end{equation} so \begin{equation} \left[\,\mathbf{h}\,\right] \boldsymbol{=}\left[\,\mathbf{a}\boldsymbol{\times}\mathbf{b}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm H_3 & \boldsymbol{+}\mathrm H_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm H_3 & \hphantom{\boldsymbol{=}}0 &\boldsymbol{-}\mathrm H_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm H_2 & \boldsymbol{+}\mathrm H_1 & \hphantom{\boldsymbol{=}}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{11}\label{11} \end{equation} and moreover \begin{equation} \mathbf{g} \boldsymbol{=} 
\begin{bmatrix} \mathrm H_4\vphantom{\dfrac{a}{b}}\\ \mathrm H_5\vphantom{\dfrac{a}{b}}\\ \mathrm H_6\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathrm a_4\mathrm b_1\boldsymbol{-}\mathrm a_1\mathrm b_4\vphantom{\dfrac{a}{b}}\\ \mathrm a_4\mathrm b_2\boldsymbol{-}\mathrm a_2\mathrm b_4\vphantom{\dfrac{a}{b}}\\ \mathrm a_4\mathrm b_3\boldsymbol{-}\mathrm a_3\mathrm b_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=}\mathrm a_4\mathbf{b}\boldsymbol{-}\mathrm b_4\mathbf{a} \tag{12}\label{12} \end{equation} Then the real 6-vector $\mathbf{H}$ of equation \eqref{09} is in block form \begin{equation} \mathbf{H} \boldsymbol{=} \begin{bmatrix} \mathbf{h}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ \mathbf{g}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathbf{a}\boldsymbol{\times}\mathbf{b}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ a_4\mathbf{b}\boldsymbol{-}b_4\mathbf{a}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \tag{13}\label{13} \end{equation} and the matrix $\left[\,\mathbf{H}\,\right]$ of equation \eqref{08} is in block form \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=}\left[\,\mathbf{A}\boldsymbol{\times}\mathbf{B}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \begin{array}{ccc|c} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm H_3 & \boldsymbol{+}\mathrm H_2 & \boldsymbol{+}\mathrm H_4\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm H_3 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm H_1 & \boldsymbol{+}\mathrm H_5\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm H_2 & \boldsymbol{+}\mathrm H_1 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{+}\mathrm H_6\vphantom{\dfrac{a}{b}}\\ \hline \boldsymbol{-}\mathrm H_4 & \boldsymbol{-}\mathrm H_5 & \boldsymbol{-}\mathrm H_6 & \hphantom{\boldsymbol{=}}0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\mathbf{a}\boldsymbol{\times}\mathbf{b}\,\right] & & \left(a_4\mathbf{b}\boldsymbol{-}b_4\mathbf{a}\right) \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\left(a_4\mathbf{b}\boldsymbol{-}b_4\mathbf{a}\right)^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{14}\label{14} \end{equation}
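Again a small numerical sketch (numpy, arbitrary sample values) of the block form of equation (14):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])     # spatial parts, arbitrary sample values
b = np.array([-4.0, 0.5, 2.0])
a4, b4 = 1.5, -0.7                # fourth components

A = np.append(a, a4)              # Eq. (06a)
B = np.append(b, b4)              # Eq. (06b)

# Eq. (07): [A x B] = B A^T - A B^T, a 4 x 4 antisymmetric matrix
H = np.outer(B, A) - np.outer(A, B)
assert np.allclose(H, -H.T)

# Block form of Eq. (14): spatial block [a x b], last column a4 b - b4 a
assert np.allclose(H[:3, :3], np.outer(b, a) - np.outer(a, b))
assert np.allclose(H[:3, 3], a4 * b - b4 * a)
print("block form of Eq. (14) confirmed")
```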

$\color{blue}{\boldsymbol{\S 4-}\textbf{In the 4-dimensional Minkowski space-time}}$

Consider as in $\boldsymbol{\S 3}$ two 4-vectors in the 4-dimensional Minskowski space-time \begin{align} \mathbf{X} & \boldsymbol{=} \begin{bmatrix} x_1\vphantom{\dfrac{a}{b}}\\ x_2\vphantom{\dfrac{a}{b}}\\ x_3\vphantom{\dfrac{a}{b}}\\ x_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \mathbf{x}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ x_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \,,\qquad \mathbf{X}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} x_1 & x_2 & x_3 & x_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \hphantom{x_1} & \mathbf{x}^{\boldsymbol{\top}} & \hphantom{x_3} & x_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{15a}\label{15a}\\ \mathbf{P} & \boldsymbol{=} \begin{bmatrix} p_1\vphantom{\dfrac{a}{b}}\\ p_2\vphantom{\dfrac{a}{b}}\\ p_3\vphantom{\dfrac{a}{b}}\\ p_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \mathbf{p}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ p_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \,,\qquad \mathbf{P}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} p_1 & p_2 & p_3 & p_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \hphantom{p_1} & \mathbf{p}^{\boldsymbol{\top}} & \hphantom{p_3} & p_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{15b}\label{15b} \end{align} Note that in $\boldsymbol{\S 3}$ splitting a 4-vector in a 3-vector and a scalar is formal, for example $\mathbf{A}\boldsymbol{=}\left(\mathbf{a},\mathrm a_4\right)$ in equation \eqref{06a}, while here this splitting is essential since the 3-vector and the scalar parts are the space and time parts respectively of a Lorentz 4-vector, for example $\mathbf{X}\boldsymbol{=}\left(\mathbf{x},x_4\right)$ in equation \eqref{15a}. 
As in $\boldsymbol{\S 3}$ we define the vector product of these Lorentz 4-vectors as the following antisymmetric matrix \begin{align} \left[\,\mathbf{H}\,\right] & \boldsymbol{=}\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{P}\,\right]\boldsymbol{\equiv}\mathbf{P}\mathbf{X}^{\boldsymbol{\top}}\boldsymbol{-}\mathbf{X}\mathbf{P}^{\boldsymbol{\top}}\boldsymbol{=} \begin{bmatrix} p_1\vphantom{\dfrac{a}{b}}\\ p_2\vphantom{\dfrac{a}{b}}\\ p_3\vphantom{\dfrac{a}{b}}\\ p_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} x_1 & x_2 & x_3 & x_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{-} \begin{bmatrix} x_1\vphantom{\dfrac{a}{b}}\\ x_2\vphantom{\dfrac{a}{b}}\\ x_3\vphantom{\dfrac{a}{b}}\\ x_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} p_1 & p_2 & p_3 & p_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \nonumber\\ & \boldsymbol{=} \begin{bmatrix} \hphantom{\boldsymbol{=\:}}0 & \boldsymbol{-}\left(x_1p_2\boldsymbol{-}x_2p_1\right) & \boldsymbol{+}\left(x_3p_1\boldsymbol{-}x_1p_3\right) & \boldsymbol{+}\left(x_4p_1\boldsymbol{-}x_1p_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\left(x_1p_2\boldsymbol{-}x_2p_1\right) & \hphantom{\boldsymbol{=}\:}0 & \boldsymbol{-}\left(x_2p_3\boldsymbol{-}x_3p_2\right) & \boldsymbol{+}\left(x_4p_2\boldsymbol{-}x_2p_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\left(x_3p_1\boldsymbol{-}x_1p_3\right) & \boldsymbol{+}\left(x_2p_3\boldsymbol{-}x_3p_2\right) & \hphantom{\boldsymbol{=}\:}0 & \boldsymbol{+}\left(x_4p_3\boldsymbol{-}x_3p_4\right)\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\left(x_4p_1\boldsymbol{-}x_1p_4\right) & \boldsymbol{-}\left(x_4p_2\boldsymbol{-}x_2p_4\right) & \boldsymbol{-}\left(x_4p_3\boldsymbol{-}x_3p_4\right) & \hphantom{\boldsymbol{=}\:}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{16}\label{16} \end{align} Defining \begin{equation} \mathbf{h} \boldsymbol{=} \begin{bmatrix} \mathrm h_1\vphantom{\dfrac{a}{b}}\\ \mathrm h_2\vphantom{\dfrac{a}{b}}\\ \mathrm h_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} x_2p_3\boldsymbol{-}x_3p_2\vphantom{\dfrac{a}{b}}\\ x_3p_1\boldsymbol{-}x_1p_3\vphantom{\dfrac{a}{b}}\\ x_1p_2\boldsymbol{-}x_2p_1\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=}\mathbf{x}\boldsymbol{\times}\mathbf{p} \tag{17}\label{17} \end{equation} we have in matrix form \begin{equation} \left[\,\mathbf{h}\,\right] \boldsymbol{=}\left[\,\mathbf{x}\boldsymbol{\times}\mathbf{p}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_3 & \boldsymbol{+}\mathrm h_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm h_3 & \hphantom{\boldsymbol{=}}0 &\boldsymbol{-}\mathrm h_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm h_2 & \boldsymbol{+}\mathrm h_1 & \hphantom{\boldsymbol{=}}0 \vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{18}\label{18} \end{equation} Moreover we define \begin{equation} \mathbf{g} \boldsymbol{=} \begin{bmatrix} \mathrm g_1\vphantom{\dfrac{a}{b}}\\ \mathrm g_2\vphantom{\dfrac{a}{b}}\\ \mathrm g_3\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} x_4p_1\boldsymbol{-}x_1p_4\vphantom{\dfrac{a}{b}}\\ x_4p_2\boldsymbol{-}x_2p_4\vphantom{\dfrac{a}{b}}\\ x_4p_3\boldsymbol{-}x_3p_4\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=}x_4\mathbf{p}\boldsymbol{-}p_4\mathbf{x} \tag{19}\label{19} \end{equation} The real 6-vector $\mathbf{H}$, whose components are the six independent elements of the antisymmetric matrix $[\,\mathbf{H}\,]$ of equation \eqref{16}, is in block form \begin{equation} \mathbf{H} \boldsymbol{=} \begin{bmatrix} 
\mathbf{h}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ \mathbf{g}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathbf{x}\boldsymbol{\times}\mathbf{p}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ x_4\mathbf{p}\boldsymbol{-}p_4\mathbf{x}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \tag{20}\label{20} \end{equation} and the matrix $\left[\,\mathbf{H}\,\right]$ of equation \eqref{16} is in block form \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=}\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{P}\,\right]\boldsymbol{\equiv} \begin{bmatrix} \begin{array}{ccc|c} \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_3 & \boldsymbol{+}\mathrm h_2 & \boldsymbol{+}\mathrm g_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\mathrm h_3 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{-}\mathrm h_1 & \boldsymbol{+}\mathrm g_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\mathrm h_2 & \boldsymbol{+}\mathrm h_1 & \hphantom{\boldsymbol{=}}0 & \boldsymbol{+}\mathrm g_3\vphantom{\dfrac{a}{b}}\\ \hline \boldsymbol{-}\mathrm g_1 & \boldsymbol{-}\mathrm g_2 & \boldsymbol{-}\mathrm g_3 & \hphantom{\boldsymbol{=}}0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\mathbf{h}\,\right] & & \mathbf{g} \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{g}^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{21}\label{21} \end{equation} Note that if the 4-vectors $\mathbf{X},\mathbf{P}$ in equations \eqref{15a},\eqref{15b} are the space-time position and relativistic linear momentum respectively of a particle \begin{equation} \mathbf{X} \boldsymbol{=}\left(\mathbf{x}, ct\right) \qquad \mathbf{P} \boldsymbol{=}\left(\gamma m_{0}\mathbf{u}, \gamma m_{0} c\right) \tag{22}\label{22} \end{equation} then the matrix $\left[\,\mathbf{H}\,\right]$ of equation \eqref{16} or \eqref{21} could be the definition of the $\textbf{relativistic angular momentum}$ \begin{equation} \left[\,\mathbf{H}\,\right] \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\mathbf{x}\boldsymbol{\times}\mathbf{p}\,\right] & & \left(ct\mathbf{p}\boldsymbol{-}\gamma m_{0}c\mathbf{x}\vphantom{\tfrac{a}{b}}\right) \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\left(ct\,\mathbf{p}\boldsymbol{-}\gamma m_{0}c\,\mathbf{x}\vphantom{\tfrac{a}{b}}\right)^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{\tfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{23}\label{23} \end{equation} The real 6-vector $\mathbf{H}$ is \begin{equation} \mathbf{H} \boldsymbol{=} \begin{bmatrix} \mathbf{h}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ \mathbf{g}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \mathbf{x}\boldsymbol{\times}\mathbf{p}\vphantom{\dfrac{\tfrac{a}{b}}{b}}\\ ct\mathbf{p}\boldsymbol{-}\gamma m_{0}c\mathbf{x}\vphantom{\dfrac{a}{\tfrac{a}{b}}} \end{bmatrix} \tag{24}\label{24} \end{equation} the vector $\mathbf{h}$ being the 3-vector angular momentum. We could define in a similar way outer products of various 4-vectors and express by them relativistic equations corresponding to the non-relativistic ones. 
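To see the block structure of equations (16)-(24) concretely, here is a minimal numerical sketch (assuming numpy and arbitrary sample values for the particle; the time component is written fourth, as in equations (15) and (22)):

```python
import numpy as np

c, m0 = 1.0, 2.0                       # arbitrary illustrative values (|u| < c)
u = np.array([0.3, 0.1, -0.2])         # 3-velocity
x = np.array([1.0, 2.0, 3.0])          # 3-position
t = 5.0

gamma = 1.0 / np.sqrt(1.0 - u @ u / c**2)
p = gamma * m0 * u                     # relativistic 3-momentum

# 4-vectors with the time component written fourth, Eqs. (15) and (22)
X = np.append(x, c * t)
P = np.append(p, gamma * m0 * c)

# Eq. (16): [X x P] = P X^T - X P^T, an antisymmetric 4 x 4 matrix
H = np.outer(P, X) - np.outer(X, P)
assert np.allclose(H, -H.T)

h = np.array([H[2, 1], H[0, 2], H[1, 0]])    # spatial block, Eqs. (17)-(18)
g = H[:3, 3]                                 # mixed block, Eqs. (19) and (21)

assert np.allclose(h, np.cross(x, p))                    # h = x x p
assert np.allclose(g, c * t * p - gamma * m0 * c * x)    # g = ct p - gamma m0 c x, Eq. (23)
print("block form of Eqs. (21)/(23) confirmed")
```

The further outer products defined below follow the same pattern.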
For example, for the 4-vector $\mathbf{F}$ of a pure 3-vector force $\mathbf{f}$ applied on a particle of 3-vector velocity $\mathbf{u}$ \begin{equation} \mathbf{F} \boldsymbol{=} \begin{bmatrix} f_1\vphantom{\dfrac{a}{b}}\\ f_2\vphantom{\dfrac{a}{b}}\\ f_3\vphantom{\dfrac{a}{b}}\\ f_4\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \mathbf{f}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ \dfrac{\mathbf{f}\boldsymbol{\cdot}\mathbf{u}}{c}\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{25}\label{25} \end{equation} we could define its moment about a point by the following antisymmetric matrix \begin{equation} \left[\,\mathbf{M}\,\right] \boldsymbol{=}\left[\,\mathbf{X}\boldsymbol{\times}\mathbf{F}\,\right] \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\mathbf{x}\boldsymbol{\times}\mathbf{f}\,\right] & & \left(ct\,\mathbf{f}\boldsymbol{-}\dfrac{\mathbf{f}\boldsymbol{\cdot}\mathbf{u}}{c}\,\mathbf{x}\vphantom{\tfrac{a}{b}}\right) \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\left(ct\,\mathbf{f}\boldsymbol{-}\dfrac{\mathbf{f}\boldsymbol{\cdot}\mathbf{u}}{c}\,\mathbf{x}\vphantom{\tfrac{a}{b}}\right)^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{\dfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{26}\label{26} \end{equation} and express that this moment $\left[\,\mathbf{M}\,\right]$ is equal to the rate of change of angular momentum $\left[\,\mathbf{H}\,\right]$ of equation \eqref{23} with respect to the proper time $\tau$ \begin{equation} \left[\,\mathbf{M}\,\right] \boldsymbol{=}\dfrac{\mathrm d\left[\,\mathbf{H}\,\right]}{\mathrm d\tau} \tag{27}\label{27} \end{equation} We could formally define the outer product of 4-vectors where one or both is a 4-vector operator. 
For example consider the 4-gradient vector operator \begin{equation} \boldsymbol{\Box\!\!\!\!\!\Box\!\!\!\!\!\Box}\boldsymbol{=} \begin{bmatrix} \boldsymbol{-}\partial/\partial x_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\partial/\partial x_2\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\partial/\partial x_3\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\partial/\partial c\,t\vphantom{\dfrac{a}{b}}\\ \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}\boldsymbol{\nabla}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}\partial/\partial c\,t\vphantom{\dfrac{a}{b}}\\ \end{bmatrix} \tag{28}\label{28} \end{equation} and the electromagnetic potential 4-vector \begin{equation} \boldsymbol{\Phi}\boldsymbol{=} \begin{bmatrix} c\,A_1\vphantom{\dfrac{a}{b}}\\ c\,A_2\vphantom{\dfrac{a}{b}}\\ c\,A_3\vphantom{\dfrac{a}{b}}\\ \phi\vphantom{\dfrac{a}{b}}\\ \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \vphantom{\dfrac{a}{b}}\\ c\mathbf{A}\vphantom{\dfrac{a}{b}}\\ \vphantom{\dfrac{a}{b}}\\ \phi\vphantom{\dfrac{a}{b}}\\ \end{bmatrix} \tag{29}\label{29} \end{equation} Their outer product is the antisymmetric matrix of the electromagnetic field \begin{equation} \mathcal{E\!\!\!\!E} \boldsymbol{=}\boldsymbol{-}\boldsymbol{\Box\!\!\!\!\!\Box\!\!\!\!\!\Box}\boldsymbol{\times}\boldsymbol{\Phi} \boldsymbol{=}\boldsymbol{-} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,\boldsymbol{-\nabla\times}\left(c\,\mathbf{A}\right)\,\right] & & \left[\dfrac{\partial \left(c\,\mathbf{A}\right)}{\partial \left(c\,t\right)}\boldsymbol{-}\left(\boldsymbol{-\nabla}\phi\right)\right] \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\left[\dfrac{\partial \left(c\,\mathbf{A}\right)}{\partial \left(c\,t\right)}\boldsymbol{-}\left(\boldsymbol{-\nabla}\phi\right)\right]^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{\dfrac{a}{b}}{b}} \end{array} \end{bmatrix} \tag{30}\label{30} \end{equation} that is \begin{equation} \mathcal{E\!\!\!\!E} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} & & & \vphantom{\dfrac{a}{b}}\\ & \left[\,c\,\mathbf{B}\,\right] & & \boldsymbol{+}\mathbf{E} \vphantom{\dfrac{a}{b}}\\ & & & \vphantom{\dfrac{a}{b}}\\ \hline & \boldsymbol{-}\mathbf{E}^{\boldsymbol{\top}} & & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} \begin{array}{ccc|c} 0 & \boldsymbol{-}c\,B_3 & \boldsymbol{+}c\,B_2 & \boldsymbol{+}E_1\vphantom{\dfrac{a}{b}}\\ \boldsymbol{+}c\,B_3 & 0 & \boldsymbol{-}c\,B_1 & \boldsymbol{+}E_2 \vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}c\,B_2 & \boldsymbol{+}c\,B_1 & 0 & \boldsymbol{+}E_3\vphantom{\dfrac{a}{b}}\\ \hline \boldsymbol{-}E_1 & \boldsymbol{-}E_2 & \boldsymbol{-}E_3 & 0\vphantom{\dfrac{a}{b}} \end{array} \end{bmatrix} \tag{31}\label{31} \end{equation}
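As a last sketch, the construction of equations (28)-(31) can be checked symbolically for a generic potential. The following assumes sympy and simply compares the entries of the matrix in equation (30) with the familiar expressions $\mathbf{B}\boldsymbol{=\nabla\times}\mathbf{A}$ and $\mathbf{E}\boldsymbol{=-\nabla}\phi\boldsymbol{-}\partial\mathbf{A}/\partial t$:

```python
import sympy as sp

# Coordinates and a generic 4-potential (c A, phi); c is kept explicit.
t, x1, x2, x3, c = sp.symbols('t x1 x2 x3 c', positive=True)
phi = sp.Function('phi')(t, x1, x2, x3)
A = [sp.Function(f'A{i}')(t, x1, x2, x3) for i in (1, 2, 3)]

Phi = [c*A[0], c*A[1], c*A[2], phi]          # 4-potential, Eq. (29)
box = [lambda f: -sp.diff(f, x1),            # 4-gradient components, Eq. (28)
       lambda f: -sp.diff(f, x2),
       lambda f: -sp.diff(f, x3),
       lambda f: sp.diff(f, t) / c]

# Eq. (30): element (i, j) of -(Phi Box^T - Box Phi^T), each differential
# operator understood to act on the component of the other factor
EM = sp.Matrix(4, 4, lambda i, j: box[i](Phi[j]) - box[j](Phi[i]))

# The fields obtained from the potentials: B = curl A, E = -grad phi - dA/dt
B = [sp.diff(A[2], x2) - sp.diff(A[1], x3),
     sp.diff(A[0], x3) - sp.diff(A[2], x1),
     sp.diff(A[1], x1) - sp.diff(A[0], x2)]
E = [-sp.diff(phi, xi) - sp.diff(A[k], t) for k, xi in enumerate((x1, x2, x3))]

# Block form of Eq. (31): the spatial block holds c B, the last column holds +E
assert sp.simplify(EM[0, 1] + c*B[2]) == 0
assert sp.simplify(EM[1, 2] + c*B[0]) == 0
assert sp.simplify(EM[0, 2] - c*B[1]) == 0
assert all(sp.simplify(EM[k, 3] - E[k]) == 0 for k in range(3))
print("Eq. (31) reproduced from the potentials")
```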

Outer products of 4-vectors in this explicit matrix form are neither particularly useful nor in common use in modern physics; when they are used, they are expressed via the more elegant notation of tensor calculus.


I know that it is maybe too late to respond, but I find the question on the cross product of four-vectors in Minkowski spacetime very interesting. The answer is actually given in "Lecture Notes on General Relativity" by Sean M. Carroll, on p. 23. The generalization of the cross product to spacetime can be viewed in terms of the Hodge dual of the wedge product. This gives a product that involves the Levi-Civita symbol (rather than the metric tensor used in the answer above) $$ *(A \wedge B)_i=\epsilon_i^{jk} A_jB_k $$ Carroll concludes: "This is why the cross product only exists in three dimensions — because only in three dimensions do we have an interesting map from two dual vectors to a third dual vector. If you wanted to you could define a map from n − 1 one-forms to a single one-form, but I’m not sure it would be of any use."
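To connect this with the antisymmetric matrices of the previous answer, here is a small numerical sketch (numpy, Euclidean conventions with all indices down so that no metric factors appear, arbitrary sample vectors): in 3D the Hodge dual of the wedge product reproduces the cross product, while in 4D the wedge of two vectors simply keeps its $4\cdot 3/2 = 6$ independent components.

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    """Totally antisymmetric Levi-Civita symbol as an n-index array."""
    eps = np.zeros((n,) * n)
    for perm in permutations(range(n)):
        # parity of the permutation via its inversion count
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        eps[perm] = -1.0 if inversions % 2 else 1.0
    return eps

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])

# Wedge product of two vectors: an antisymmetric 2-index array
wedge = np.outer(a, b) - np.outer(b, a)

# 3D: the Hodge dual maps its 3 independent components back to a vector;
# the factor 1/2 compensates for summing over both orderings of (j, k)
hodge = 0.5 * np.einsum('ijk,jk->i', levi_civita(3), wedge)
assert np.allclose(hodge, np.cross(a, b))

# 4D: the wedge of two 4-vectors has 4*3/2 = 6 independent components,
# and its Hodge dual is again a 2-form rather than a vector
A4 = np.array([1.0, 2.0, 3.0, 4.0])
B4 = np.array([0.5, -1.0, 2.0, 0.0])
wedge4 = np.outer(A4, B4) - np.outer(B4, A4)
print(wedge4[np.triu_indices(4, k=1)])    # the 6 independent components
```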

P.S. The situation is slightly different in pure mathematics, where we have Clifford algebras and the Pin(p,q) and Spin(p,q) groups, and where the vector norm can be $\pm 1$.

