
I am aware of how to get normal vector/color data for a matcap in a material.

Example of matcap normal data: [image]

But how can that vector/color data be obtained within Geometry Nodes from the input geometry?

I can see that the Normal node does give some kind of normal data, but it doesn't look the same.

  • The matcap appears in camera space (it changes with the view). Is that what you want? Or do you want the normal in tangent space, fixed to the surface, as it would be when baked into a UV? How do you want to use this data?
    – Robin Betts
    Commented Dec 26, 2022 at 9:05

1 Answer


If we want to display the normals of an object the way Blender displays them (with a matcap, or with a few other methods), the main thing we need to do is map the normals from their native [-1, 1] range into the visible [0, 1] range:

[Image: geometry nodes setup remapping object-space normals to a color attribute, previewed on two Suzannes]

Here, I'm using geometry nodes to create an attribute representing our remapped, object-space normals. On the Suzanne on the left, that isn't what you'd expect from a matcap; but on the Suzanne on the right, who has been rotated so that her Z axis points roughly toward our view, it is.
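For reference, the remap those nodes perform is just a per-component multiply-add. Here's a minimal Python sketch of the same arithmetic (the function name is mine, for illustration only):

    # Remap a normal's components from [-1, 1] into the displayable [0, 1]
    # range -- the same multiply-add the node setup above performs.
    def normal_to_color(normal):
        return tuple(c * 0.5 + 0.5 for c in normal)

    # A normal pointing straight at +Z becomes the familiar pale normal-map blue:
    print(normal_to_color((0.0, 0.0, 1.0)))  # (0.5, 0.5, 1.0)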

So with a matcap, what we're actually interested in is our camera-space normals. To get those, we need to know what our camera space is. The easiest way (not the only way, but the easiest) is to simply instance our geometry from an object that copies the transform of the camera:

[Image: a mesh object with a Copy Transforms constraint targeting the camera, instancing the hidden wireframe mesh]

Here, I have a different mesh object copying transforms from the camera via a constraint, and then instancing a hidden mesh (shown in this image as a wireframe). We're still displaying our object-space normals, but our object space is now exactly the same space as our camera space.
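If you'd rather wire that constraint up from a script, here's a hedged sketch using Blender's Python API; the object names are assumptions, so substitute your own:

    import bpy

    # Assumed names: "DisplayObject" is the mesh whose geometry nodes instance
    # the hidden mesh; "Camera" is the scene camera.
    display_obj = bpy.data.objects["DisplayObject"]
    camera = bpy.data.objects["Camera"]

    # Copy Transforms makes DisplayObject's object space coincide with camera
    # space, so the remapped object-space normals become camera-space normals.
    constraint = display_obj.constraints.new(type='COPY_TRANSFORMS')
    constraint.target = camera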

What if we don't want to have a camera? Can we use the location of the viewport eye as a space? No. While there are ways to use the location of the eye in shader nodes, there is no way to use it in geometry nodes. When you think about it, this makes sense: we might, after all, have two different viewports open; would Blender keep track of different geometry for each viewport? It works with shader nodes because, yes, each viewport does keep track of different samples, different renders.

  • With you most of the way... (nice way to camera space!) I could be wrong, but I thought the Blender matcap used this XYZ->RGB? BTW, thanks for your edit yesterday.
    – Robin Betts
    Commented Dec 26, 2022 at 18:48
  • @RobinBetts It's hard to say exactly how that matcap is built, since that description differs from mine by 0.5/255 in one channel, and we don't have a center texel for the matcap. I suppose it's technically computable from the adjacent texels? Seems not worth worrying about to me.
    – Nathan
    Commented Dec 26, 2022 at 19:10
  • Oops, of course: the 128-255 Z makes no difference when there are no negative Z's. I was wrong :) sorry
    – Robin Betts
    Commented Dec 26, 2022 at 19:26
  • You can use Map Range in Vector mode.
    – Markus von Broady
    Commented Dec 27, 2022 at 10:14
  • @MarkusvonBroady Thanks, I didn't realize that! Wish I could do the same in shader nodes.
    – Nathan
    Commented Dec 27, 2022 at 16:14
