I am supposed to find out whether for a
- scalar function $p$ and a
- divergence-free vector function $\boldsymbol{u}$
we have that
$$\nabla\cdot\Big [\boldsymbol{u}(\nabla\cdot\nabla p) - \nabla (\boldsymbol{u}\cdot\nabla p)\Big]=0.$$
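For reference, if I write this out in index notation (summing over repeated indices), the claim should read
$$\partial_i\Big[u_i\,\partial_j\partial_j p - \partial_i\big(u_j\,\partial_j p\big)\Big]\stackrel{?}{=}0, \qquad \text{with } \partial_i u_i = 0.$$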
Approach 1: Gradient of dot product
I use https://proofwiki.org/wiki/Gradient_of_Dot_Product and try to show that the second term in the brackets is equal to the first:
$$\nabla (\boldsymbol{u}\cdot\nabla p) = (\boldsymbol{u}\cdot\nabla)\nabla p+(\nabla p\cdot\nabla)\boldsymbol{u} + \boldsymbol{u}\times (\nabla\times \nabla p) + \nabla p \times (\nabla\times \boldsymbol{u})$$
This, however, turns into an endless game; I don't see a clear end to where this goes.
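The only simplification I see is that the curl of a gradient vanishes, $\nabla\times\nabla p = 0$, so the third term drops:
$$\nabla (\boldsymbol{u}\cdot\nabla p) = (\boldsymbol{u}\cdot\nabla)\nabla p+(\nabla p\cdot\nabla)\boldsymbol{u} + \nabla p \times (\nabla\times \boldsymbol{u}),$$
but the remaining two extra terms do not obviously cancel against anything.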
Approach 2: BAC-CAB
I try to use https://mathworld.wolfram.com/BAC-CABIdentity.html to rewrite the expression inside the brackets: $$\boldsymbol{u}(\nabla\cdot\nabla p) - \nabla (\boldsymbol{u}\cdot\nabla p) = \nabla p\times (\nabla\times\boldsymbol{u}) $$
This is not necessarily $0$. However, I am not sure that I am allowed to use the identity that way, since $\nabla$ is a differential operator and not an ordinary vector.
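Because I could not settle this on paper, I tried a small symbolic sanity check with SymPy. The fields $p$ and $\boldsymbol{u}$ below are arbitrary hand-picked choices of mine (not part of the original problem), and the helpers `grad`, `div`, `curl` are my own Cartesian definitions:

```python
from sympy import symbols, sin, simplify, Matrix

x, y, z = symbols('x y z')

# Hand-picked test fields (arbitrary choices, not from the original problem):
p = x*y                     # scalar function
u = Matrix([sin(y), 0, 0])  # divergence-free, since d(sin y)/dx = 0

def grad(f):
    """Cartesian gradient of a scalar."""
    return Matrix([f.diff(x), f.diff(y), f.diff(z)])

def div(v):
    """Cartesian divergence of a vector."""
    return v[0].diff(x) + v[1].diff(y) + v[2].diff(z)

def curl(v):
    """Cartesian curl of a vector."""
    return Matrix([v[2].diff(y) - v[1].diff(z),
                   v[0].diff(z) - v[2].diff(x),
                   v[1].diff(x) - v[0].diff(y)])

bracket = u * div(grad(p)) - grad(u.dot(grad(p)))  # u (div grad p) - grad(u . grad p)
baccab  = grad(p).cross(curl(u))                   # the attempted BAC-CAB rewriting

print(simplify(bracket - baccab))  # nonzero -> the rewriting fails for these fields
print(simplify(div(bracket)))      # nonzero -> the divergence is not 0 either
```

For this particular choice both printed results are nonzero, which makes me doubt both the rewriting and the original claim, but I would still like to understand it analytically.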
Approach 3: Gradient of Laplacian
In the question "Does the Laplacian and gradient commute?" I learned that $\nabla (\Delta p) = \Delta (\nabla p)$. I tried to use this on the first term. Furthermore, we know that the divergence-free property always allows us to write $\nabla\cdot (\boldsymbol{u}p) = \boldsymbol{u}\cdot\nabla p$.
$$ \begin{align} \nabla\cdot\Big [\boldsymbol{u}(\nabla\cdot\nabla p) \Big ] &= \boldsymbol{u}\cdot\nabla(\nabla\cdot\nabla p)\\ &= \boldsymbol{u}\cdot\nabla(\Delta p)\\ &= \boldsymbol{u}\cdot \Delta (\nabla p) \end{align} $$
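The first equality here is just the product rule together with $\nabla\cdot\boldsymbol{u}=0$:
$$\nabla\cdot\Big[\boldsymbol{u}\,(\nabla\cdot\nabla p)\Big] = (\nabla\cdot\boldsymbol{u})\,(\nabla\cdot\nabla p) + \boldsymbol{u}\cdot\nabla(\nabla\cdot\nabla p) = \boldsymbol{u}\cdot\nabla(\nabla\cdot\nabla p).$$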
Can I now pull in the $\boldsymbol{u}$, like this: $$\boldsymbol{u}\cdot \Delta (\nabla p) \stackrel{?}{=} \Delta (\boldsymbol{u}\cdot \nabla p)$$
That would be great, as the second term of my problem is exactly this. However, I think I am cheating here. What does it mean to take the Laplacian of a gradient?
When I try to check whether the divergence-free property allows this last move, I get $$\boldsymbol{u}\cdot \Delta (\nabla p) = \boldsymbol{u}\cdot[\nabla\cdot\nabla (\nabla p)]$$ but I cannot make sense of the divergence of the gradient of a gradient.
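In the meantime, here is a SymPy sketch that tests this questionable step directly. I interpret $\Delta(\nabla p)$ as the componentwise Laplacian, which should be the right reading in Cartesian coordinates; the test fields are again my own arbitrary choices:

```python
from sympy import symbols, sin, simplify, Matrix

x, y, z = symbols('x y z')

p = x*y                     # arbitrary scalar test function
u = Matrix([sin(y), 0, 0])  # arbitrary divergence-free test field

def grad(f):
    """Cartesian gradient of a scalar."""
    return Matrix([f.diff(x), f.diff(y), f.diff(z)])

def lap(f):
    """Scalar Laplacian."""
    return f.diff(x, 2) + f.diff(y, 2) + f.diff(z, 2)

def vec_lap(v):
    """Vector Laplacian, taken componentwise (valid in Cartesian coordinates)."""
    return v.applyfunc(lap)

lhs = u.dot(vec_lap(grad(p)))  # u . Delta(grad p)
rhs = lap(u.dot(grad(p)))      # Delta(u . grad p)

print(simplify(lhs - rhs))  # nonzero -> u cannot simply be pulled inside
```

For these fields the two sides differ, so pulling $\boldsymbol{u}$ inside the Laplacian does not seem legitimate in general. Any help is much appreciated.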