$\begingroup$

I read that Maxwell's equations are covariant under Lorentz transformations, but I can't find a proof, or at least a proof understandable by someone who doesn't know higher mathematics (please don't start writing hieroglyphics in tensor notation, because I can't understand them; note that because of this request the question is not a duplicate). I tried a direct calculation, which goes as follows. Consider the two standard frames $S$ and $S'$ (the latter in motion towards positive $x$) that we use in relativity, and suppose that in $S'$ Maxwell's equations hold: \begin{equation} \left\{ \begin{array}{l} \displaystyle{ \nabla' \cdot \mathbf{E}' = \frac{\rho'}{\varepsilon_0}}\\ \displaystyle{ \nabla' \cdot \mathbf{B}' = 0 }\\ \displaystyle{ \nabla' \times \mathbf{E}' = -\frac{\partial \mathbf{B}'}{\partial t'} }\\ \displaystyle{ \nabla' \times \mathbf{B}' = \mu_0 \mathbf{J}' + \varepsilon_0 \mu_0 \frac{\partial \mathbf{E}'}{\partial t'}} \end{array} \right. \end{equation} Then I expect that making the substitutions \begin{equation} \mathbf{E}' = \left(E_x , \gamma (E_y - v B_z) , \gamma (E_z + v B_y)\right) \end{equation} \begin{equation} \mathbf{B}' = \left(B_x , \gamma \left(B_y + \frac{v}{c^2} E_z \right) , \gamma \left(B_z - \frac{v}{c^2} E_y \right) \right) \end{equation} \begin{equation} \rho'=\left(1-\frac{u_x v}{c^2} \right) \gamma \rho \end{equation} \begin{equation} \frac{\partial}{\partial x'} = \gamma \left( \frac{\partial}{\partial x } + \frac{v}{c^2} \frac{\partial}{\partial t } \right) \end{equation} \begin{equation} \frac{\partial}{\partial y'} = \frac{\partial}{\partial y} \end{equation} \begin{equation} \frac{\partial}{\partial z'} = \frac{\partial}{\partial z} \end{equation} \begin{equation} \frac{\partial}{\partial t'} = \gamma \left( \frac{\partial}{\partial t } + v \frac{\partial}{\partial x } \right) \end{equation} \begin{equation} u_x' = \frac{u_x - v}{1-\frac{u_x v}{c^2}} \end{equation} \begin{equation} u_y' = \frac{u_y}{\gamma \left( 1-\frac{u_x v}{c^2} \right)} 
\end{equation} \begin{equation} u_z' = \frac{u_z}{\gamma \left( 1-\frac{u_x v}{c^2} \right)} \end{equation} I should obtain \begin{equation} \left\{ \begin{array}{l} \displaystyle{ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}}\\ \displaystyle{ \nabla \cdot \mathbf{B} = 0 }\\ \displaystyle{ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} }\\ \displaystyle{ \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \varepsilon_0 \mu_0 \frac{\partial \mathbf{E}}{\partial t}} \end{array} \right. \end{equation} But if you do that, you will see that this works only for the transverse components ($y$ and $z$) of the vector equations (the 3rd and 4th, the ones with the curl). Let's focus, for example, on the simplest Maxwell equation, the 2nd, which in $S'$ reads $\nabla ' \cdot \mathbf {B}' = 0$. After the substitutions this transforms into $\nabla \cdot \mathbf {B} = 0 $ only if the $x$ component of $ \nabla \times \mathbf {E} + \frac {\partial \mathbf {B}}{\partial t} $ vanishes. At first glance you could say: "Where is the problem? This holds because of the 3rd Maxwell equation." The problem is that this equation is not primed! My starting hypothesis was the primed Maxwell equations; the unprimed Maxwell equations are what I'm trying to prove. If one could somehow prove that the longitudinal component of the primed 3rd equation transforms correctly into the unprimed one, the proof would work, but if you try to transform $\nabla ' \times \mathbf {E} ' = -\frac {\partial \mathbf {B} '} {\partial t '} $ you will find $\nabla \times \mathbf {E} = -\frac {\partial \mathbf {B}}{\partial t} $ only if you assume $\nabla \cdot \mathbf {B}=0$. In other words:

  • the primed 2nd equation becomes the unprimed 2nd equation only if the longitudinal component of the unprimed 3rd equation holds
  • the longitudinal component of the primed 3rd equation becomes the longitudinal component of the unprimed 3rd equation only if the unprimed 2nd equation holds

This impasse is frustrating. Carrying out the calculation, you can see that things go similarly with the other two equations:

  • the primed 1st equation becomes the unprimed 1st equation only if the longitudinal component of the unprimed 4th equation holds
  • the longitudinal component of the primed 4th equation becomes the longitudinal component of the unprimed 4th equation only if the unprimed 1st equation holds

Even more compactly I can write (the numbers refer to Maxwell's equations in the order used above):

  • $2'$ to $2$ if $3x$
  • $3x'$ to $3x$ if $2$
  • $1'$ to $1$ if $4x$
  • $4x'$ to $4x$ if $1$

In this way you can see at a glance that we are at a dead end! I could add:

  • $3y'$ to $3y$ with no problems
  • $3z'$ to $3z$ with no problems
  • $4y'$ to $4y$ with no problems
  • $4z'$ to $4z$ with no problems

Maybe I wouldn't have problems using the potential formulation of Maxwell's equations? (It doesn't look prohibitively difficult, unlike the tensor approach, but I haven't tried this route.) Anyway, reading Resnick's "Introduzione alla relatività ristretta" makes me think that the field formulation should suffice for this proof, but he carries out the calculation explicitly only for the $y$ component of $\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$, which is one of the special cases in which the proof works! I can't believe that proving the invariance requires a change of formalism; surely it is possible to reach the goal with some cunning trickery I can't see. But which one?
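Incidentally, the transformation rules listed above can be sanity-checked numerically. A minimal Python sketch (the field values and the value of $c$ are made up for the check, and `boost_fields` is just a helper name): boosting with $v$ and then with $-v$ must return the original fields, and the known Lorentz invariant $E^2 - c^2 B^2$ must be unchanged.

```python
import math

c = 2.0  # arbitrary value for the check; any c with |v| < c works

def boost_fields(E, B, v):
    """Field transformation quoted above, for a boost with speed v along x."""
    g = 1.0 / math.sqrt(1.0 - v * v / c**2)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v / c**2 * Ez), g * (Bz - v / c**2 * Ey))
    return Ep, Bp

# sample field values, made up for the check
E = (0.3, -1.2, 0.7)
B = (0.9, 0.4, -0.5)
v = 1.0

Ep, Bp = boost_fields(E, B, v)
Eb, Bb = boost_fields(Ep, Bp, -v)  # boost back with -v

# the round trip returns the original fields
assert all(abs(a - b) < 1e-12 for a, b in zip(E + B, Eb + Bb))

# E^2 - c^2 B^2 is unchanged (it is a known Lorentz invariant)
inv  = sum(e * e for e in E)  - c**2 * sum(b * b for b in B)
invp = sum(e * e for e in Ep) - c**2 * sum(b * b for b in Bp)
assert abs(inv - invp) < 1e-12
```

This does not prove covariance of the equations, of course; it only checks that the field transformation itself is internally consistent.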

$\endgroup$
  • $\begingroup$ If you want to see a proof of this without tensor notation, then Einstein's 1905 paper on special relativity has it. $\endgroup$
    – user4552
    Commented Apr 26, 2019 at 18:53
  • $\begingroup$ Why would anyone want to do this without covariant notation? $\endgroup$
    – my2cts
    Commented May 13, 2019 at 19:23
  • $\begingroup$ @my2cts a) Because this notation is simpler and therefore understandable even by those who do not know tensor calculus. b) Because even physicists, not only mathematicians, can have fun (and in some cases find it useful) devising alternative proofs. $\endgroup$ Commented May 14, 2019 at 16:20
  • $\begingroup$ E and B together form a tensor. They are not vectors. You are doing tensor calculus, in a very convenient way. $\endgroup$
    – my2cts
    Commented Jan 15 at 10:11

3 Answers

$\begingroup$

It seems that you've actually proven invariance: Given that the full set of Maxwell's equations are true in one frame, they're true in another frame.

But you don't like that, because you want somehow to do that independently for one of the equations at a time. That's not possible, because it's only the set of Maxwell's equations that together have the invariance property. That's the Really Deep Point of all this: Coulomb's law, or Faraday's law or any other part by itself is a frame-dependent thing. Only with Maxwell's unification (including the displacement term) do you get a unified, invariant set. And that invariance requires special relativity, not Newtonian relativity.

$\endgroup$
  • $\begingroup$ The first paragraph seems to be confusing form invariance under gauge transformations with form invariance under Poincaré transformations. $\endgroup$
    – G. Smith
    Commented Apr 26, 2019 at 20:30
  • $\begingroup$ I don't understand the beginning of your answer: I didn't prove the invariance (this is my problem!). I also don't understand the end of your answer: I don't use Galilean transformations but Lorentz ones. Even less do I understand the criticism "you want somehow to do that independently for one of the equations at a time". This is completely wrong: I considered Maxwell's equations all together; that's why I put them into a system, and in attempting to prove the invariance of one I tried to exploit the others (the fact that the transverse components transform on their own is accidental). $\endgroup$ Commented Apr 26, 2019 at 20:57
  • $\begingroup$ So what’s the problem? You can show them all to be invariant so long as they’re all invariant. That’s a fine proof. $\endgroup$ Commented Apr 26, 2019 at 22:20
  • $\begingroup$ @BobJacobsen My problem is that proceeding by making substitutions and simplifying (even exploiting other equations) doesn't work. Of the 8 equations (1, 2, 3x, 3y, 3z, 4x, 4y, 4z), four transform on their own, and I was hoping this was a starting point for transforming the others. But to transform the others into unprimed ones we don't need 3y, 3z, 4y or 4z (that would have been nice!); we need another unprimed equation, 1, 2, 3x or 4x, and I can't exploit those because my starting hypotheses were the primed ones. $\endgroup$ Commented Apr 27, 2019 at 8:30
$\begingroup$

I found a way to solve the problem: let's start by writing explicitly the 8 equations \begin{equation} \left\{ \begin{array}{l} \displaystyle{(a) \qquad \frac{\partial E'_x}{\partial x'} + \frac{\partial E'_y}{\partial y'} + \frac{\partial E'_z}{\partial z'} = \frac{\rho'}{\epsilon_0} }\\ \displaystyle{(b) \qquad \frac{\partial B'_x}{\partial x'} + \frac{\partial B'_y}{\partial y'} + \frac{\partial B'_z}{\partial z'} = 0 \qquad }\\ \displaystyle{(c) \qquad \frac{\partial E'_z}{\partial y'} - \frac{\partial E'_y}{\partial z'} = - \frac{\partial B'_x}{\partial t'} }\\ \displaystyle{(d) \qquad \frac{\partial E'_x}{\partial z'} - \frac{\partial E'_z}{\partial x'} = - \frac{\partial B'_y}{\partial t'} }\\ \displaystyle{(e) \qquad \frac{\partial E'_y}{\partial x'} - \frac{\partial E'_x}{\partial y'} = - \frac{\partial B'_z}{\partial t'} }\\ \displaystyle{(f) \qquad \frac{\partial B'_z}{\partial y'} - \frac{\partial B'_y}{\partial z'} = \mu_0 j'_x + \epsilon_0 \mu_0 \frac{\partial E'_x}{\partial t'} }\\ \displaystyle{(g) \qquad \frac{\partial B'_x}{\partial z'} - \frac{\partial B'_z}{\partial x'} = \mu_0 j'_y + \epsilon_0 \mu_0 \frac{\partial E'_y}{\partial t'} }\\ \displaystyle{(h) \qquad \frac{\partial B'_y}{\partial x'} - \frac{\partial B'_x}{\partial y'} = \mu_0 j'_z + \epsilon_0 \mu_0 \frac{\partial E'_z}{\partial t'} } \end{array} \right. 
\end{equation} Doing the substitutions of the question we can rearrange in this way (I write $[\mathbf{A}]_x$ for $A_x$) \begin{equation} \left\{ \begin{array}{l} \displaystyle{(a) \qquad \nabla \cdot \mathbf{E} - \frac{\rho}{\epsilon_0} = v \left[ \nabla \times \mathbf{B} - \mu_0 \mathbf{j} - \epsilon_0 \mu_0 \frac{\partial \mathbf{E}}{\partial t} \right]_x}\\ \displaystyle{(b) \qquad \nabla \cdot \mathbf{B} = - \frac{v}{c^2} \left[ \nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} \right]_x }\\ \displaystyle{(c) \qquad \left[ \nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} \right]_x = - v \nabla \cdot \mathbf{B} }\\ \displaystyle{(d) \qquad \frac{\partial E_x}{\partial z} - \frac{\partial E_z}{\partial x} = - \frac{\partial B_y}{\partial t} }\\ \displaystyle{(e) \qquad \frac{\partial E_y}{\partial x} - \frac{\partial E_x}{\partial y} = - \frac{\partial B_z}{\partial t} }\\ \displaystyle{(f) \qquad \left[ \nabla \times \mathbf{B} - \mu_0 \mathbf{j} - \epsilon_0 \mu_0 \frac{\partial \mathbf{E}}{\partial t} \right]_x = \frac{v}{c^2} \left( \nabla \cdot \mathbf{E} - \frac{\rho}{\epsilon_0} \right) }\\ \displaystyle{(g) \qquad \frac{\partial B_x}{\partial z} - \frac{\partial B_z}{\partial x} = \mu_0 j_y + \epsilon_0 \mu_0 \frac{\partial E_y}{\partial t} }\\ \displaystyle{(h) \qquad \frac{\partial B_y}{\partial x} - \frac{\partial B_x}{\partial y} = \mu_0 j_z + \epsilon_0 \mu_0 \frac{\partial E_z}{\partial t} } \end{array} \right. \end{equation} We see that (d), (e), (g) and (h) transform into unprimed equations automatically (that's why in the question I wrote that $3y'$, $3z'$, $4y'$ and $4z'$ transform into $3y$, $3z$, $4y$ and $4z$ with no problems), while I have arranged the other four equations in a convenient way: the problem was to prove the invariance of these. Once the system is written in the form above, the proof is simple:

  • multiplying (c) by $-\frac{v}{c^2}$ and substituting into (b), we get $\left(1-\frac{v^2}{c^2}\right) \nabla \cdot \mathbf{B} = 0$, hence $\nabla \cdot \mathbf{B} = 0$ (i.e. the unprimed (b) of the first system), which inserted back into (c) gives $\frac{\partial E_z}{\partial y} - \frac{\partial E_y}{\partial z} = - \frac{\partial B_x}{\partial t} $ (i.e. the unprimed (c) of the first system)

  • multiplying (f) by $v$ and substituting into (a), we get $\left(1-\frac{v^2}{c^2}\right)\left(\nabla \cdot \mathbf{E} - \frac{\rho}{\varepsilon_0}\right) = 0$, hence $\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}$ (i.e. the unprimed (a) of the first system), which inserted back into (f) gives $ \frac{\partial B_z}{\partial y} - \frac{\partial B_y}{\partial z} = \mu_0 j_x + \epsilon_0 \mu_0 \frac{\partial E_x}{\partial t} $ (i.e. the unprimed (f) of the first system)
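The elimination step can also be checked with plain numbers. A minimal Python sketch (the value of $v/c$ and the trial value are made up): written as a homogeneous $2 \times 2$ linear system, the pair (b), (c) has determinant $1 - v^2/c^2 \neq 0$ for $|v| < c$, so $\nabla \cdot \mathbf{B}$ and $\left[\nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t}\right]_x$ must both vanish.

```python
# (b):  D = -(v/c**2) * C      with  D = div B,  C = [curl E + dB/dt]_x
# (c):  C = -v * D
# As a linear system  M @ (D, C) = 0  with rows taken from (b) and (c):
#   M = [[1, v/c**2],
#        [v, 1     ]]
c, v = 1.0, 0.8  # arbitrary values with |v| < c

det_M = 1.0 * 1.0 - (v / c**2) * v   # = 1 - v**2/c**2
assert det_M > 0.0                   # nonzero for any |v| < c

# Substituting (c) into (b) gives D = (v**2/c**2) * D, a contraction:
# iterating it drives any trial value of D to the only solution, D = 0.
D = 1.0  # made-up trial value
for _ in range(200):
    D = (v**2 / c**2) * D
assert abs(D) < 1e-12
```

The same argument, with determinant $1 - v^2/c^2$ again, covers the pair (a), (f).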

This completes the proof. If I had found it in an electrodynamics book, I would have saved a lot of time. It's a pity this interesting proof is not in the literature (not everyone can handle tensors). Resnick's book shows how to transform (d), but that is only the simplest case.

$\endgroup$
$\begingroup$

As noted in another answer, the name of the game is to ensure that the equations governing the dynamics transform to or from equations that satisfy the dynamics, and in this way to show that the dynamics itself is preserved under the transform. You do need to make the assumption at one end of the transform or the other. It doesn't matter which end, since the transforms are invertible.

First: generalize the transforms to the following: $$ ∇' = \left(γ \left(\frac{∂}{∂x} + α u \frac{∂}{∂t}\right), \frac{∂}{∂y}, \frac{∂}{∂z}\right),\quad \frac{∂}{∂t}' = γ \left(\frac{∂}{∂t} + β u \frac{∂}{∂x}\right),\\ 𝐁' = \left(B_x, γ (B_y + α u E_z), γ (B_z - α u E_y)\right),\quad 𝐄' = \left(E_x, γ (E_y - β u B_z), γ (E_z + β u B_y)\right),\\ 𝐉' = \left(γ (J_x - β u ρ), J_y, J_z\right),\quad ρ' = γ (ρ - α u J_x), $$ and pose the equations as: $$ ∇'·𝐄' = \frac{ρ'}{ε_0},\quad ∇'×𝐁' = μ_0 𝐉' + ε_0 μ_0 \frac{∂𝐄'}{∂t'}\\ ∇'·𝐁' = 0,\quad ∇'×𝐄' = -\frac{∂𝐁'}{∂t'} $$ where $$ γ = \frac1{\sqrt{1 - α β u^2}},\quad β ε_0 μ_0 = α,\quad α β ≥ 0. $$

Second, consider the infinitesimal forms of these transforms: $$ δ(∇) = α υ \left(\frac{∂}{∂t}, 0, 0\right) = α 𝛖 \frac{∂}{∂t},\quad δ\left(\frac{∂}{∂t}\right) = β υ \frac{∂}{∂x} = β 𝛖·∇,\\ δ(𝐁) = α υ \left(0, +E_z, -E_y\right) = -α 𝛖×𝐄,\quad δ(𝐄) = β υ \left(0, -B_z, +B_y\right) = +β 𝛖×𝐁,\\ δ(𝐉) = (-β υ ρ, 0, 0) = -β 𝛖 ρ,\quad δ(ρ) = -α υ J_x = -α 𝛖·𝐉, $$ where 𝛖 = (υ, 0, 0). This can be generalized further to arbitrary 𝛖, independent of direction, to: $$ δ(∇) = α 𝛖 \frac{∂}{∂t},\quad δ\left(\frac{∂}{∂t}\right) = β 𝛖·∇,\\ δ(𝐁) = -α 𝛖×𝐄,\quad δ(𝐄) = +β 𝛖×𝐁,\\ δ(𝐉) = -β 𝛖 ρ,\quad δ(ρ) = -α 𝛖·𝐉. $$
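As a numerical cross-check (a Python sketch with $α = 1/c^2$, $c = 1$, and made-up field values): for small $υ$, the finite transform of $𝐁$ quoted at the start differs from $𝐁$ by $δ(𝐁) = -α\, 𝛖×𝐄$ up to $O(υ^2)$ terms.

```python
import math

c = 1.0
alpha, v = 1.0 / c**2, 1e-6   # small boost speed for a first-order comparison
E = (0.3, -1.2, 0.7)          # sample fields, made up for the check
B = (0.9, 0.4, -0.5)

# finite transform of B for a boost with speed v along x (as in the question)
g = 1.0 / math.sqrt(1.0 - v * v / c**2)
Bp = (B[0], g * (B[1] + v / c**2 * E[2]), g * (B[2] - v / c**2 * E[1]))

# infinitesimal form: delta(B) = -alpha * (v, 0, 0) x E
uxE = (0.0, -v * E[2], v * E[1])          # (v, 0, 0) x E
deltaB = tuple(-alpha * w for w in uxE)

# the finite and infinitesimal forms agree to first order in v
for i in range(3):
    assert abs((Bp[i] - B[i]) - deltaB[i]) < 1e-3 * v
```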

It's not necessary to deal with the finite forms of the transforms, since they can already be obtained by "exponentiating": $$ X' = \exp(δ) X = X + δX + \frac{δ^2X}{2!} + \frac{δ^3X}{3!} + ⋯. $$ For instance (reverting back to $𝛖 = (υ, 0, 0)$): $$ δ^2\left(\frac{∂}{∂t}\right) = β δ\left(υ \frac{∂}{∂x}\right) = α β υ^2 \frac{∂}{∂t},\\ δ^2\left(\frac{∂}{∂x}\right) = α υ δ\left(\frac{∂}{∂t}\right) = α β υ^2 \frac{∂}{∂x},\\ δ\left(\frac{∂}{∂y}\right) = 0,\quad δ\left(\frac{∂}{∂z}\right) = 0. $$ Thus: $$\begin{align} \exp(δ) \frac{∂}{∂t} &= \left(1 + \frac{α β υ^2}{2!} + \frac{(α β)^2 υ^4}{4!} + ⋯\right) \frac{∂}{∂t} + \left(υ + \frac{α β υ^3}{3!} + \frac{(α β)^2 υ^5}{5!} + ⋯\right) β \frac{∂}{∂x}\\ &= \cosh(\sqrt{α β} υ) \frac{∂}{∂t} + \frac{\sinh(\sqrt{α β} υ)}{\sqrt{α β}} β \frac{∂}{∂x}\\ &= γ \left(\frac{∂}{∂t} + β u \frac{∂}{∂x}\right),\\ \exp(δ) \frac{∂}{∂x} &= \left(1 + \frac{α β υ^2}{2!} + \frac{(α β)^2 υ^4}{4!} + ⋯\right) \frac{∂}{∂x} + \left(υ + \frac{α β υ^3}{3!} + \frac{(α β)^2 υ^5}{5!} + ⋯\right) α \frac{∂}{∂t}\\ &= \cosh(\sqrt{α β} υ) \frac{∂}{∂x} + \frac{\sinh(\sqrt{α β} υ)}{\sqrt{α β}} α \frac{∂}{∂t}\\ &= γ \left(\frac{∂}{∂x} + α u \frac{∂}{∂t}\right),\\ \exp(δ) \frac{∂}{∂y} &= \frac{∂}{∂y},\\ \exp(δ) \frac{∂}{∂z} &= \frac{∂}{∂z}, \end{align}$$ where $$ γ = \cosh(\sqrt{α β} υ),\quad u = \frac{\tanh(\sqrt{α β} υ)}{\sqrt{α β}}\quad⇒\quad γ = \frac1{\sqrt{1 - α β u^2}}. $$ The finite forms for the other transforms may be similarly obtained from their infinitesimal forms by exponentiating them. Therefore, it suffices to just consider the infinitesimal forms of the transforms.
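The exponentiation can be checked by summing the series numerically. A Python sketch ($α = β = 1$ is an arbitrary choice with $αβ > 0$):

```python
import math

alpha = beta = 1.0   # arbitrary choice with alpha*beta > 0 for the check
ups = 0.5            # boost parameter upsilon

def delta(vec):
    """infinitesimal boost acting on a*d/dt + b*d/dx, stored as (a, b)"""
    a, b = vec
    return (alpha * ups * b, beta * ups * a)

def exp_delta(vec, terms=40):
    """exp(delta) = sum over n of delta^n / n!, summed term by term"""
    out, term, fact = (0.0, 0.0), vec, 1.0
    for n in range(terms):
        out = (out[0] + term[0] / fact, out[1] + term[1] / fact)
        term = delta(term)
        fact *= n + 1
    return out

theta = math.sqrt(alpha * beta) * ups
a, b = exp_delta((1.0, 0.0))   # exp(delta) applied to d/dt

# closed form: cosh(theta) d/dt + (sinh(theta)/sqrt(alpha*beta)) * beta * d/dx
assert abs(a - math.cosh(theta)) < 1e-12
assert abs(b - beta * math.sinh(theta) / math.sqrt(alpha * beta)) < 1e-12

# gamma and u come out as advertised
gamma = math.cosh(theta)
u = math.tanh(theta) / math.sqrt(alpha * beta)
assert abs(gamma - 1.0 / math.sqrt(1.0 - alpha * beta * u * u)) < 1e-12
```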

For the scalar equations, one obtains (using $β ε_0 μ_0 = α$): $$\begin{align} δ\left(∇·𝐄 - \frac{ρ}{ε_0}\right) &= δ(∇)·𝐄 + ∇·δ(𝐄) - \frac{δρ}{ε_0}\\ &= \left(α 𝛖 \frac{∂}{∂t}\right)·𝐄 + ∇·(+β 𝛖×𝐁) + \frac{α 𝛖·𝐉}{ε_0}\\ &= 𝛖 · \left(α \frac{∂𝐄}{∂t} - β ∇×𝐁 + \frac{α 𝐉}{ε_0}\right)\\ &= -\frac{α}{ε_0 μ_0} 𝛖 · \left(∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} - μ_0 𝐉\right) \end{align}$$ after using some vector algebra and vector calculus (with constant 𝛖): $$\begin{align} ∇·(β 𝛖×𝐁) &= β ∇·(𝛖×𝐁)\\ &= -β 𝛖·(∇×𝐁) \end{align}$$ Similarly, $$\begin{align} δ(∇·𝐁) &= δ(∇)·𝐁 + ∇·δ(𝐁)\\ &= \left(α 𝛖 \frac{∂}{∂t}\right) · 𝐁 + ∇·(-α 𝛖×𝐄)\\ &= α 𝛖 · \left(\frac{∂𝐁}{∂t} + ∇×𝐄\right) \end{align}$$

For the vector equations, one obtains: $$\begin{align} δ\left(∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} - μ_0 𝐉\right) &= δ(∇)×𝐁 + ∇×δ(𝐁) - ε_0 μ_0 \left(δ\left(\frac{∂}{∂t}\right) 𝐄 + \frac{∂}{∂t} (δ𝐄)\right) - μ_0 δ(𝐉)\\ &= \left(α 𝛖 \frac{∂}{∂t}\right)×𝐁 + ∇ × (-α 𝛖×𝐄)\\ &- ε_0 μ_0 \left((β 𝛖·∇) 𝐄 + \frac{∂}{∂t} (+β 𝛖×𝐁)\right) - μ_0 (-β 𝛖 ρ)\\ &= α 𝛖 × \frac{∂𝐁}{∂t} - α ∇×(𝛖×𝐄) - α 𝛖·∇ 𝐄 - α \frac{∂}{∂t}(𝛖×𝐁) + α \frac{𝛖 ρ}{ε_0}\\ &= -α 𝛖 \left(∇·𝐄 - \frac{ρ}{ε_0}\right) \end{align}$$ after using some vector calculus and vector algebra: $$\begin{align} α 𝛖 × \frac{∂𝐁}{∂t} - α \frac{∂}{∂t}(𝛖×𝐁) &= α 𝛖 × \frac{∂𝐁}{∂t} - α 𝛖×\frac{∂𝐁}{∂t} = 𝟬,\\ - α ∇×(𝛖×𝐄) - α 𝛖·∇ 𝐄 &= α (𝛖·∇𝐄 - 𝛖∇·𝐄) - α 𝛖·∇ 𝐄\\ &= -α𝛖 (∇·𝐄) \end{align}$$ Similarly, $$\begin{align} δ\left(∇×𝐄 + \frac{∂𝐁}{∂t}\right) &= δ(∇)×𝐄 + ∇×δ(𝐄) + \left(δ\left(\frac{∂}{∂t}\right) 𝐁 + \frac{∂}{∂t} (δ𝐁)\right)\\ &= \left(α 𝛖 \frac{∂}{∂t}\right) × 𝐄 + ∇×(+β 𝛖×𝐁) + \left((β 𝛖·∇) 𝐁 + \frac{∂}{∂t} (-α 𝛖×𝐄)\right)\\ &= α 𝛖 × \frac{∂𝐄}{∂t} + β ∇×(𝛖×𝐁) + β 𝛖·∇ 𝐁 - α \frac{∂}{∂t}(𝛖×𝐄)\\ &= +β 𝛖 (∇·𝐁) \end{align}$$
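The constant-𝛖 identity $∇×(𝛖×𝐄) = 𝛖(∇·𝐄) - (𝛖·∇)𝐄$ that both derivations rely on can be spot-checked numerically. A Python sketch (the field, the vector 𝛖 and the sample point are made up; central differences are exact on a quadratic field, up to roundoff):

```python
h = 0.1  # central differences are exact for quadratic fields

def E(x, y, z):
    """arbitrary quadratic vector field, made up for the check"""
    return (x*y + z*z, 2*x*x - y*z, x*z + y*y)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def partial(F, i, p):
    """central-difference derivative of vector field F along axis i at point p"""
    pp, pm = list(p), list(p)
    pp[i] += h
    pm[i] -= h
    return tuple((a - b) / (2*h) for a, b in zip(F(*pp), F(*pm)))

def curl(F, p):
    d = [partial(F, i, p) for i in range(3)]  # d[i][j] = dF_j/dx_i
    return (d[1][2] - d[2][1], d[2][0] - d[0][2], d[0][1] - d[1][0])

def div(F, p):
    return sum(partial(F, i, p)[i] for i in range(3))

u = (0.3, -0.7, 0.2)   # constant vector (plays the role of upsilon)
p = (0.5, -1.0, 2.0)   # sample point

lhs = curl(lambda x, y, z: cross(u, E(x, y, z)), p)   # curl(u x E)

dE = [partial(E, i, p) for i in range(3)]
u_grad_E = tuple(sum(u[i] * dE[i][j] for i in range(3)) for j in range(3))
rhs = tuple(u[j] * div(E, p) - u_grad_E[j] for j in range(3))  # u(div E) - (u.grad)E

assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```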

Now, use "≡" to denote equality, subject to the field law ... or "on-shell" equality. Identities that apply, irrespective of the field equations - like those just used - are "off-shell" and are denoted "=". Then: $$ ∇·𝐄 ≡ \frac{ρ}{ε_0},\quad ∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} ≡ μ_0 𝐉,\\ ∇·𝐁 ≡ 0,\quad ∇×𝐄 + \frac{∂𝐁}{∂t} ≡ 𝟬, $$ or just: $$ ∇·𝐄 - \frac{ρ}{ε_0} ≡ 0,\quad ∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} - μ_0 𝐉 ≡ 𝟬,\\ ∇·𝐁 ≡ 0,\quad ∇×𝐄 + \frac{∂𝐁}{∂t} ≡ 𝟬. $$

The condition for covariance is that on-shell equalities transform to on-shell equalities: if $A ≡ B$, then $δA ≡ δB$. This is enough to ensure that $δ^2A ≡ δ^2B$, $δ^3A ≡ δ^3B$, and so on, since the infinitesimal transforms are all linear in $A$ and $B$. Ultimately, therefore, $A' = \exp(δ) A ≡ \exp(δ) B = B'$. In this case, we have: $$ δ\left(∇·𝐄 - \frac{ρ}{ε_0}\right) = -\frac{α}{ε_0 μ_0} 𝛖 · \left(∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} - μ_0 𝐉\right) ≡ 0,\\ δ(∇·𝐁) = α 𝛖 · \left(\frac{∂𝐁}{∂t} + ∇×𝐄\right) ≡ 0,\\ δ\left(∇×𝐁 - ε_0 μ_0 \frac{∂𝐄}{∂t} - μ_0 𝐉\right) = -α 𝛖 \left(∇·𝐄 - \frac{ρ}{ε_0}\right) ≡ 𝟬,\\ δ\left(∇×𝐄 + \frac{∂𝐁}{∂t}\right) = +β 𝛖 (∇·𝐁) ≡ 𝟬. $$

So, yes, you do have to assume the on-shell condition for one or the other set of equations, because all you're doing is showing that equations governing dynamics (and kinematics) transform to equations satisfied by the dynamics. The proof that the transform (in either direction) is on-shell is therefore subject to the on-shell assumption. It's easiest to prove everything with infinitesimal transforms, instead of transforms in finite form, because the infinitesimal transforms are linear.

More generally, "on-shell" means "subject to the equations governing the dynamics (and kinematics)". A similar set of transforms applies, in relativity, to the total energy $E$ and momentum $𝐩$ of a body: $$δE = -β 𝛖·𝐩,\quad δ𝐩 = -α 𝛖 E.$$ Then, it follows, assuming $δm = 0$, that: $$\begin{align} δ\left(α^2 E^2 - α β p^2 - β^2 m^2\right) &= 2 α^2 E δE - 2 α β 𝐩·δ𝐩 - 2 β^2 m δm\\ &= 2 \left(α^2 E (-β 𝛖·𝐩) - α β 𝐩·(-α 𝛖 E) - β^2 m (0)\right)\\ &= 2 \left(-β α^2 E 𝛖·𝐩 + α^2 β 𝐩·𝛖 E\right)\\ &= 0 \end{align}$$ Thus, if $α^2 E^2 - α β p^2 ≡ β^2 m^2$, i.e. $α^2 E^2 - α β p^2 - β^2 m^2 ≡ 0$, then $$δ\left(α^2 E^2 - α β p^2 - β^2 m^2\right) = 0 ≡ 0.$$ The equality, in this case, actually holds off-shell. This is the "mass-shell" for a body with energy $E$, momentum $𝐩$, moving below light speed $c$, with rest-mass $m$, when $α = 1/c^2$, $β = 1$: the origin of the term "on-shell". The "shell", here, is the plot of $E^2/c^4 - p^2/c^2 - m^2 ≡ 0$ for $(E, 𝐩)$, for a given rest mass $m ≠ 0$, which is a two-sheeted hyperboloid: one shell (sheet) for $E ≥ +m c^2$, the other for $E ≤ -m c^2$.
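A quick numerical illustration of the mass shell (a sketch with $α = β = 1$, i.e. units with $c = 1$; the momentum values are made up): a finite boost preserves $E^2 - p^2 - m^2 = 0$.

```python
import math

# units with c = 1, i.e. alpha = beta = 1
m = 2.0
px, py, pz = 0.7, -0.3, 1.1                          # made-up momentum
E = math.sqrt(m * m + px * px + py * py + pz * pz)   # put the body on shell

u = 0.6                                              # boost speed along x
gamma = 1.0 / math.sqrt(1.0 - u * u)
Ep = gamma * (E - u * px)                            # finite form of delta(E) = -u*px
pxp = gamma * (px - u * E)                           # finite form of delta(px) = -u*E

# E^2 - p^2 - m^2 vanishes before and after the boost
before = E * E - (px * px + py * py + pz * pz) - m * m
after = Ep * Ep - (pxp * pxp + py * py + pz * pz) - m * m
assert abs(before) < 1e-12 and abs(after) < 1e-12
```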

$\endgroup$
