
I'm implementing mesh skinning in a project of mine. I can calculate vertex positions based on bone transformations, but I've run into a problem with calculating their normals.

From the resources I've read, it appears that a given vertex's skinned normal is the weighted sum of each influencing bone's transform applied to the vertex's normal.
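To make that concrete, here is a rough sketch of what I mean (a minimal linear-blend-skinning example; the GLM types and the function/parameter names are just illustrative placeholders, not taken from my actual code):

```cpp
// Sketch of the normal skinning described above (linear blend skinning).
// boneMatrices are the usual bone * inverseBindPose matrices already used
// for skinning positions.
#include <glm/glm.hpp>
#include <vector>

glm::vec3 skinNormal(const glm::vec3& normal,
                     const int boneIndices[4],
                     const float boneWeights[4],
                     const std::vector<glm::mat4>& boneMatrices)
{
    glm::vec3 result(0.0f);
    for (int i = 0; i < 4; ++i)
    {
        // Upper-left 3x3 rotates the normal; with non-uniform scale the
        // inverse transpose would be used instead.
        glm::mat3 rot = glm::mat3(boneMatrices[boneIndices[i]]);
        result += boneWeights[i] * (rot * normal);
    }
    return glm::normalize(result);
}
```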

What I don't understand is how this would produce normals equivalent to what you would get if you calculated each vertex's normal directly (i.e., from the sum of the normals of its adjacent faces).

For example, consider the case of three connected vertices with the normals shown below. Note the bone in the center, which has the same position as the center vertex.

A simple rigged mesh

Next, suppose this bone is turned 45 degrees CCW. If this bone only influenced the vertex to its right, my understanding is that it would produce the following result:

Single influenced vertex

This is because there is only one influencing bone, and the normal is rotated by the same rotation as the bone, 45 degrees CCW.

Next, consider the case where the bone influences both the middle and right vertices. Each vertex's normal would be rotated exactly as in the previous image.

Two influenced vertices

But neither of these cases would produce the "correct" normal for the middle vertex, shown here:

Presumably the correct middle normal

So, my question is, is this the intended behavior for this algorithm? Is it only meant to be an approximation, or have I missed something critical? How are these normals meant to be calculated?

For what it's worth, the resources I've been following are presented here and here.

  • What if the vertex used both matrices (blended) to calculate the normal? Commented Feb 3, 2017 at 8:35
  • Yes, it's an approximation; see "Accurate and Efficient Lighting for Skinned Models" (vcg.isti.cnr.it/deformFactors) if you'd like to implement a more accurate version (they provide code).
    – arkan
    Commented Apr 11, 2021 at 2:38

1 Answer

You're right: standard skinning pipelines don't produce correct normals. You get similar problems when using translation on joints, which is very common in facial animation. The only way to get a correct result is to recompute each normal from the average of the surrounding face normals. That adjacency information isn't available in the vertex shader, so you'll have to use a geometry shader, or run your skinning on the CPU or in a compute shader.
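As a rough sketch of the CPU (or compute) route, assuming a triangle-list mesh and GLM math types (the names here are placeholders, not any particular engine's API): skin the positions first, then rebuild each vertex normal by accumulating the cross products of its adjacent skinned triangles:

```cpp
// Sketch: recompute vertex normals from already-skinned positions.
// 'indices' is a triangle list; the unnormalized cross products give
// area-weighted face normals, so larger faces contribute more.
#include <glm/glm.hpp>
#include <vector>
#include <cstdint>

void recomputeNormals(const std::vector<glm::vec3>& skinnedPositions,
                      const std::vector<uint32_t>& indices,
                      std::vector<glm::vec3>& outNormals)
{
    outNormals.assign(skinnedPositions.size(), glm::vec3(0.0f));

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        uint32_t i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        glm::vec3 faceNormal = glm::cross(
            skinnedPositions[i1] - skinnedPositions[i0],
            skinnedPositions[i2] - skinnedPositions[i0]);

        // Accumulate the face normal on each of the triangle's vertices.
        outNormals[i0] += faceNormal;
        outNormals[i1] += faceNormal;
        outNormals[i2] += faceNormal;
    }

    for (glm::vec3& n : outNormals)
        if (glm::dot(n, n) > 0.0f) // skip vertices with no referencing triangles
            n = glm::normalize(n);
}
```

If you want angle-weighted rather than area-weighted normals, weight each face's contribution by the corner angle at the vertex instead of using the raw cross product.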

  • For the sake of OpenGL 2.x, I'll probably stick with CPU skinning. Good to hear I'm not misunderstanding things.
    – lowq
    Commented Feb 7, 2017 at 3:24
