So I’m currently trying to formulate Maxwell’s equations in dimensions other than 3 in order to improve my understanding of electromagnetism. In 3D, Maxwell’s equations can be written as
$$\begin{matrix}\nabla\cdot\vec{D}=\rho&\nabla\times\vec{E}=-\left(\frac{\partial\vec{B}}{\partial t}\right)\\\nabla\cdot\vec{B}=0&\nabla\times\vec{H}=\left(\frac{\partial\vec{D}}{\partial t}\right)+\vec{J}\\\end{matrix}$$
According to my calculations, in other dimensions Maxwell’s equations become
$$\nabla\cdot\vec{D}=\rho\qquad\nabla\vec{E}-\left(\nabla\vec{E}\right)^T=-\left(\frac{\partial\widetilde{B}}{\partial t}\right)\qquad\nabla\cdot\widetilde{H}=\left(\frac{\partial\vec{D}}{\partial t}\right)+\vec{J}$$
where the magnetic fields $\widetilde{B}$ and $\widetilde{H}$ are both skew-symmetric tensor fields and are related to their typical 3D representations by
$$\begin{matrix}\widetilde{B}=\left[\begin{matrix}0&-B_z&B_y\\B_z&0&-B_x\\-B_y&B_x&0\\\end{matrix}\right]&\widetilde{H}=\left[\begin{matrix}0&-H_z&H_y\\H_z&0&-H_x\\-H_y&H_x&0\\\end{matrix}\right]\\\end{matrix}$$
Here I use the following definitions:
$$\text{Tensor divergence:}\quad\nabla\cdot\widetilde{A}=\left[\begin{matrix}\left(\frac{\partial A_{11}}{\partial x}\right)+\left(\frac{\partial A_{21}}{\partial y}\right)+\left(\frac{\partial A_{31}}{\partial z}\right)\\\left(\frac{\partial A_{12}}{\partial x}\right)+\left(\frac{\partial A_{22}}{\partial y}\right)+\left(\frac{\partial A_{32}}{\partial z}\right)\\\left(\frac{\partial A_{13}}{\partial x}\right)+\left(\frac{\partial A_{23}}{\partial y}\right)+\left(\frac{\partial A_{33}}{\partial z}\right)\\\end{matrix}\right]$$
$$\text{Vector gradient:}\quad\nabla\vec{A}=\left[\begin{matrix}\left(\frac{\partial A_x}{\partial x}\right)&\left(\frac{\partial A_x}{\partial y}\right)&\left(\frac{\partial A_x}{\partial z}\right)\\\left(\frac{\partial A_y}{\partial x}\right)&\left(\frac{\partial A_y}{\partial y}\right)&\left(\frac{\partial A_y}{\partial z}\right)\\\left(\frac{\partial A_z}{\partial x}\right)&\left(\frac{\partial A_z}{\partial y}\right)&\left(\frac{\partial A_z}{\partial z}\right)\\\end{matrix}\right]$$
So in $n$ dimensions, the magnetic field has $\frac{n^2-n}{2}$ components. In 3D, the Lorentz force density is
$$\vec{F}=\rho\vec{E}-\vec{B}\times\vec{J}$$
whereas the multidimensional variant is
$$\vec{F}=\rho\vec{E}-\widetilde{B}\vec{J}=\rho\vec{E}-\left[\begin{matrix}0&-B_z&B_y\\B_z&0&-B_x\\-B_y&B_x&0\\\end{matrix}\right]\left[\begin{matrix}J_x\\J_y\\J_z\\\end{matrix}\right]$$
The constitutive relation $\vec{D}=\widetilde{\varepsilon}\vec{E}$ still holds, but the same isn’t true for $\vec{B}=\widetilde{\mu}\vec{H}$. As you can see from
$$\widetilde{B}=\left[\begin{matrix}0&-B_z&B_y\\B_z&0&-B_x\\-B_y&B_x&0\\\end{matrix}\right]=\left[\begin{matrix}0&-\left(\mu_{xz}H_x+\mu_{yz}H_y+\mu_{zz}H_z\right)&\mu_{xy}H_x+\mu_{yy}H_y+\mu_{yz}H_z\\\mu_{xz}H_x+\mu_{yz}H_y+\mu_{zz}H_z&0&-\left(\mu_{xx}H_x+\mu_{xy}H_y+\mu_{xz}H_z\right)\\-\left(\mu_{xy}H_x+\mu_{yy}H_y+\mu_{yz}H_z\right)&\mu_{xx}H_x+\mu_{xy}H_y+\mu_{xz}H_z&0\\\end{matrix}\right]=\widetilde{?}\left[\begin{matrix}0&-H_z&H_y\\H_z&0&-H_x\\-H_y&H_x&0\\\end{matrix}\right]$$
we can’t get the 3D result from mere matrix multiplication; no single matrix $\widetilde{?}$ does the job, so some other relation is required. I haven’t been able to figure it out yet, and resources on this are exceedingly scarce. Since the magnetic field in 2D is a scalar, I think the magnetic permeability in 2D may also be a scalar. In 3D, the magnetic permeability, taken as a symmetric matrix, has 6 components.
I can’t really see a pattern here, and I somewhat doubt that the magnetic permeability is actually a tensor. Any help finding a general formula for getting $\vec{B}$ from $\vec{H}$ is appreciated.
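
As a sanity check on the Lorentz force claim, here’s a minimal NumPy sketch (the `skew` helper is just my shorthand for the $\vec{B}\mapsto\widetilde{B}$ map above) confirming that in 3D the tensor variant reproduces the cross-product form:

```python
import numpy as np

def skew(B):
    """Map a 3D pseudovector B = (Bx, By, Bz) to its skew-symmetric tensor form."""
    Bx, By, Bz = B
    return np.array([[0.0, -Bz,  By],
                     [ Bz, 0.0, -Bx],
                     [-By,  Bx, 0.0]])

rng = np.random.default_rng(0)
rho = rng.standard_normal()
E, B, J = rng.standard_normal((3, 3))  # three random 3-vectors

F_cross  = rho * E - np.cross(B, J)    # 3D form:               F = rho E - B x J
F_tensor = rho * E - skew(B) @ J       # multidimensional form: F = rho E - B~ J

assert np.allclose(F_cross, F_tensor)  # the two agree in 3D
```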


1 Answer


You already have $\tilde B_{ij} = \epsilon_{ijk}B^k \iff B^k = \frac{1}{2}\epsilon^{ijk}\tilde B_{ij}$ and similarly for $H$ and $\tilde H$. If the pseudovector components are related via $B^i = \mu^i_{\ \ j} H^j$, then the tensor components are related via

$$\tilde B_{ij} = \epsilon_{ijk} B^k = \epsilon_{ijk} \mu^k_{\ \ \ell} H^\ell = \frac{1}{2}\epsilon_{ijk} \mu^k_{\ \ \ell} \epsilon^{\ell m n} \tilde H_{mn}$$ $$\implies \tilde B_{ij} = \tilde \mu_{ij}^{\ \ mn} \tilde H_{mn}$$ where $$\tilde \mu_{ij}^{\ \ mn}\equiv \frac{1}{2}\epsilon_{ijk} \mu^k_{\ \ \ell} \epsilon^{\ell m n}$$ Note that $\tilde \mu$ is antisymmetric in each pair of indices, and therefore has nine independent components, just as one would expect, since $\mu$ is a generic $3\times 3$ matrix. Generalizing to $d$ dimensions, $\tilde \mu$ would have $d^2(d-1)^2/4$ independent components, and would express a general linear relationship between two antisymmetric $(0,2)$-tensor fields.
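In case the summation convention is unfamiliar, here is a minimal NumPy sketch of exactly this construction, with `einsum` carrying out the implied sums (the variable names are mine; `eps` is the 3D Levi-Civita symbol):

```python
import numpy as np

# Levi-Civita symbol in 3 dimensions
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

rng = np.random.default_rng(0)
mu = rng.standard_normal((3, 3))  # generic mu^k_l, nine independent entries
H = rng.standard_normal(3)        # pseudovector H
B = mu @ H                        # B^i = mu^i_j H^j

# tilde-mu_{ij}^{mn} = (1/2) eps_{ijk} mu^k_l eps^{lmn}
tmu = 0.5 * np.einsum('ijk,kl,lmn->ijmn', eps, mu, eps)

# tensor forms: B~_{ij} = eps_{ijk} B^k, and likewise for H~
tB = np.einsum('ijk,k->ij', eps, B)
tH = np.einsum('ijk,k->ij', eps, H)

# verify B~_{ij} = tilde-mu_{ij}^{mn} H~_{mn}
assert np.allclose(tB, np.einsum('ijmn,mn->ij', tmu, tH))
```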

It's also important to note that the $B\leftrightarrow \tilde B$ correspondence only holds in $3$ dimensions; in higher (resp. lower) dimensions, the magnetic field $\tilde B$ has too many (resp. too few) components to be represented as a (pseudo)vector. In the same way, the correspondence between the $(2,2)$-tensor $\tilde \mu$ and a $(1,1)$-tensor $\mu$ is also a feature of $3$ dimensions only. The magnetic field is really a $(0,2)$-tensor $\tilde B$ which can be associated to a pseudovector $B$ in the special case that $d=3$; similarly, the magnetic permeability is really a $(2,2)$-tensor $\tilde \mu$ which can be associated to a $(1,1)$-tensor $\mu$ only when $d=3$.
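The counting argument is easy to check directly: a pseudovector in $d$ dimensions has $d$ components, while an antisymmetric $(0,2)$-tensor has $d(d-1)/2$ independent components, and the two agree only at $d=3$. A throwaway loop makes the point:

```python
# pseudovector components (d) vs. independent components of an
# antisymmetric rank-2 tensor (d(d-1)/2); they coincide only for d = 3
for d in range(2, 7):
    print(f"d={d}: pseudovector {d}, antisymmetric tensor {d * (d - 1) // 2}")
```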


To further clarify the summation convention, consider the following example. The component $\tilde \mu_{12}^{\ \ \ 12}$ is

$$\tilde \mu_{12}^{\ \ \ 12} = \sum_{\ell=1}^3 \sum_{k=1}^3\frac{1}{2}\epsilon_{12k} \mu^k_{\ \ \ell} \epsilon^{\ell 12}$$ Since $\epsilon$ is completely antisymmetric, there is only one nonzero term, namely $k=3$ (otherwise $\epsilon_{12k}=0$) and $\ell=3$ (otherwise $\epsilon^{\ell 12}=0$). Therefore,

$$\tilde \mu_{12}^{\ \ \ 12} = \frac{1}{2}\underbrace{\epsilon_{123}}_{=1} \ \mu^3_{\ \ 3}\ \underbrace{\epsilon^{312}}_{=1} = \frac{1}{2} \mu^3_{\ \ 3}$$
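
The same component can be checked numerically; with NumPy's 0-based indices, $\tilde \mu_{12}^{\ \ \ 12}$ lives at `[0, 1, 0, 1]`, and the explicit double sum collapses to the single term above:

```python
import numpy as np

# Levi-Civita symbol in 3 dimensions
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

rng = np.random.default_rng(0)
mu = rng.standard_normal((3, 3))  # generic mu^k_l

# explicit double sum over k and l for tilde-mu_{12}^{12} (0-based: [0,1,...,0,1])
comp = sum(0.5 * eps[0, 1, k] * mu[k, l] * eps[l, 0, 1]
           for k in range(3) for l in range(3))

assert np.isclose(comp, 0.5 * mu[2, 2])  # = (1/2) mu^3_3, as derived above
```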

  • Is it possible you could rewrite your answer or add some clarification? I'm not well versed with the index raising and lowering notation. If you could rewrite it in a matrix multiplication form, explain how it works in more detail, or link to a sufficiently thorough explanation of it, then that'd be great. Also, usually μ and H are multiplied together to get B, but you seem to have done the opposite. If it works, that's fine; I mainly needed the relation between them. From what I know about raising and lowering indices, it involves the Minkowski metric. Since the B tensor is $3\times 3$, how does that work?
    – Laff70
    Commented Apr 16, 2021 at 19:38
  • @Laff70 Whoops, you're right, it should be $B=\mu H$, so I've corrected that. There is no index raising or lowering to be done in my answer: you can consider $\epsilon^{ijk}$ and $\epsilon_{ijk}$ to be exactly the same thing, or you could imagine that indices are raised/lowered with the Kronecker delta $\delta_{ij}$. The summation convention is that repeated indices are summed over. I cannot write it in matrix form because $\tilde \mu$ and $\epsilon$ have four and three indices, respectively, and therefore cannot be represented as $2$-dimensional arrays of numbers.
    – J. Murray
    Commented Apr 16, 2021 at 19:45
  • @Laff70 I've updated my answer with an explicit example of how to compute the components of $\tilde \mu$.
    – J. Murray
    Commented Apr 16, 2021 at 19:55
  • It's worth noting that the "natural" analog of the Levi-Civita symbol in $n$-dimensional space has $n$ indices, not 3. So while the statement that we should have $\tilde B_{ij} = \tilde \mu_{ij}^{\ \ mn} \tilde H_{mn}$ in all dimensions is true, your definition of it in terms of $\epsilon_{ijk}$ and a 2-tensor $\mu_{ij}$ really only works in 3 spatial dimensions. Commented Apr 16, 2021 at 20:09
  • @Laff70 Knock yourself out. I suspect you'll find, however, that Einstein summation is in universal use for good reason. In its most general form, a linear relationship between two matrices cannot be expressed via multiplication by a third matrix, in the following sense: if $B=\mu H$ (all matrices), then the elements of the first column of $B$ are linear combinations of the elements of the first column of $H$. In other words, if $\mu$ is merely a matrix, then the first column of $B$ can't "see" the third column of $H$. If you want each element of $B$ to be a linear combination [...]
    – J. Murray
    Commented Apr 16, 2021 at 20:48
