I am supposed to find out whether for a

  • scalar function $p$ and a
  • divergence-free vector function $\boldsymbol{u}$

we have that

$$\nabla\cdot\Big [\boldsymbol{u}(\nabla\cdot\nabla p) - \nabla (\boldsymbol{u}\cdot\nabla p)\Big]=0.$$
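For what it's worth, before chasing identities I tried a quick symbolic test. The following is my own SymPy sketch (not part of the assignment; the helper functions and the concrete choices of $\boldsymbol{u}$ and $p$ are mine and arbitrary):

```python
# My own sanity check (not part of the assignment): pick a concrete
# divergence-free u and a scalar p, then compute the divergence of the
# bracketed expression directly with SymPy.
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in V])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate(V))

u = sp.Matrix([y**2, z, 0])   # chosen so that div(u) = 0
p = x*y                       # arbitrary scalar field

assert sp.simplify(div(u)) == 0   # confirm u is divergence-free

expr = u * div(grad(p)) - grad(u.dot(grad(p)))   # u(∇·∇p) - ∇(u·∇p)
print(sp.simplify(div(expr)))     # prints -6*y, i.e. not identically zero
```

For this particular choice the divergence comes out as $-6y$, so the expression does not seem to vanish in general; still, I would like to understand analytically what goes wrong (or whether my test is flawed).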

Approach 1: Gradient of dot product

I use https://proofwiki.org/wiki/Gradient_of_Dot_Product and try to show that the second term in the brackets is equal to the first:

$$\nabla (\boldsymbol{u}\cdot\nabla p) = (\boldsymbol{u}\cdot\nabla)\nabla p+(\nabla p\cdot\nabla)\boldsymbol{u} + \boldsymbol{u}\times (\nabla\times \nabla p) + \nabla p \times (\nabla\times \boldsymbol{u})$$

This, however, turns into an endless game; I don't see a clear end to where it goes.
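At least one term drops out immediately: the curl of a gradient vanishes, $\nabla\times\nabla p=\boldsymbol{0}$, so the identity reduces to

$$\nabla (\boldsymbol{u}\cdot\nabla p) = (\boldsymbol{u}\cdot\nabla)\nabla p+(\nabla p\cdot\nabla)\boldsymbol{u} + \nabla p \times (\nabla\times \boldsymbol{u}).$$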

Approach 2: BAC-CAB

I try to use https://mathworld.wolfram.com/BAC-CABIdentity.html to rewrite the expression inside the brackets: $$\boldsymbol{u}(\nabla\cdot\nabla p) - \nabla (\boldsymbol{u}\cdot\nabla p) = \nabla p\times (\nabla\times\boldsymbol{u}) $$

This is not necessarily $0$. However, I am not sure that I am allowed to use the identity this way, since $\nabla$ is a differential operator rather than an ordinary vector.
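To see whether the rewrite can be trusted at all, here is another SymPy sketch of mine comparing both sides for the same arbitrarily chosen fields as above:

```python
# My own SymPy sketch: compare both sides of the proposed BAC-CAB rewrite
# for concrete fields before trusting it.
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in V])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate(V))

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

u = sp.Matrix([y**2, z, 0])   # divergence-free
p = x*y

lhs = u * div(grad(p)) - grad(u.dot(grad(p)))   # u(∇·∇p) - ∇(u·∇p)
rhs = grad(p).cross(curl(u))                    # ∇p × (∇×u)
print(sp.simplify(lhs - rhs))                   # nonzero vector here
```

The two sides differ for this example, which makes me doubt that $\nabla$ can be slotted into BAC-CAB like an ordinary vector.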

Approach 3: Gradient of Laplacian

In the question "Does the Laplacian and gradient commute?" I learned that $\nabla (\Delta p) = \Delta (\nabla p)$. I tried to use this on the first term. Further, since $\nabla\cdot\boldsymbol{u}=0$, the product rule $\nabla\cdot (p\,\boldsymbol{u}) = p\,\nabla\cdot\boldsymbol{u} + \boldsymbol{u}\cdot\nabla p$ always allows us to write $\nabla\cdot (\boldsymbol{u}p) = \boldsymbol{u}\cdot\nabla p$ (and likewise for any scalar in place of $p$).

$$ \begin{align} \nabla\cdot\Big [\boldsymbol{u}(\nabla\cdot\nabla p) \Big ] &= \boldsymbol{u}\cdot\nabla(\nabla\cdot\nabla p)\\ &= \boldsymbol{u}\cdot\nabla(\Delta p)\\ &= \boldsymbol{u}\cdot \Delta (\nabla p) \end{align} $$

Can I now pull in the $\boldsymbol{u}$, like this: $$\boldsymbol{u}\cdot \Delta (\nabla p) \stackrel{?}{=} \Delta (\boldsymbol{u}\cdot \nabla p)$$

That would be great, as taking the divergence of the second term in my problem gives exactly $\nabla\cdot\nabla(\boldsymbol{u}\cdot\nabla p) = \Delta (\boldsymbol{u}\cdot \nabla p)$. However, I think I am cheating here. What does it mean to take the Laplacian of a gradient?
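Reading $\Delta(\nabla p)$ componentwise, i.e. $[\Delta(\nabla p)]_i = \partial_j\partial_j\,\partial_i p$, I can at least test this step on concrete fields (again my own SymPy sketch with the same arbitrarily chosen $\boldsymbol{u}$ and $p$):

```python
# My own SymPy check of the tempting step u·Δ(∇p) =? Δ(u·∇p),
# reading Δ(∇p) as the scalar Laplacian applied to each component of ∇p.
import sympy as sp

x, y, z = sp.symbols('x y z')
V = (x, y, z)

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in V])

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in V)

u = sp.Matrix([y**2, z, 0])   # divergence-free
p = x*y

lhs = sum(u[i] * laplacian(grad(p)[i]) for i in range(3))  # u · Δ(∇p)
rhs = laplacian(u.dot(grad(p)))                            # Δ(u · ∇p)
print(sp.simplify(lhs - rhs))                              # prints -6*y
```

The printed difference is $-6y$, so pulling $\boldsymbol{u}$ inside the Laplacian does not seem to be allowed in general.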

When I try to check whether the divergence-free property allows this last move, I get $$\boldsymbol{u}\cdot \Delta (\nabla p) = \boldsymbol{u}\cdot[\nabla\cdot\nabla (\nabla p)],$$ but I cannot make sense of the divergence of the gradient of a gradient. Any help is much appreciated.

  • Can you please make your question title $\nabla\cdot (u\nabla(\nabla p))$ clearer?
    – MathArt
    Commented Apr 18 at 17:51
  • Yes, sorry, I tried to make it more fitting. Feel free to suggest a more suitable one if you have one in mind. Commented Apr 18 at 17:57
  • I don't think they are equal. The left side is $\mathbf{u}(\nabla\cdot\nabla p)=u_i\partial_j\partial_jp=\mathbf{u}\nabla^2p$, whereas the right side is $\nabla (\boldsymbol{u}\cdot\nabla p)=\partial_i(u_j\partial_jp)=u_j\partial_{ij}p+(\partial_iu_j)\partial_jp=\mathbf{u}\cdot\nabla(\nabla p)+\nabla p\cdot\nabla\mathbf{u}$.
    – MathArt
    Commented Apr 18 at 18:25
  • I do think you tried a lot. Maybe it is useful to see this post or my answer here: math.stackexchange.com/questions/4771359.
    – MathArt
    Commented Apr 18 at 18:29
  • Are you sure that your second identity, i.e. $\partial_i(u_j\partial_jp) = \dots$, is true? When I apply it, I get something completely different involving cross products. However, I am not very familiar with index notation and can't spot your mistake directly. Commented Apr 19 at 7:58
