If we want to display the normals of an object as they are displayed in Blender-- possibly with a matcap, but possibly with a few other methods-- the main thing we need to do is map the normals from their native -1 to 1 range into the visible 0 to 1 range:
![Two Suzannes with remapped object-space normals displayed as color](https://cdn.statically.io/img/i.sstatic.net/O8UHr.jpg)
Here, I'm using geometry nodes to create an attribute representing our remapped, object-space normals. On the Suzanne on the left, that isn't what you'd expect; but the Suzanne on the right has been rotated so that her Z axis points roughly toward our view, and there, it is what you'd expect.
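In node terms, that remap is just a per-component multiply-add on the normal vector. As a plain-Python sketch of the same math (using NumPy; the sample normals here are made up for illustration):

```python
import numpy as np

# Object-space normals: unit vectors with components in [-1, 1].
normals = np.array([
    [0.0, 0.0, 1.0],   # facing local +Z
    [-1.0, 0.0, 0.0],  # facing local -X
    [0.0, 1.0, 0.0],   # facing local +Y
])

# Map each component from [-1, 1] into the displayable [0, 1] range:
#   color = normal * 0.5 + 0.5
colors = normals * 0.5 + 0.5
print(colors)
```

A face pointing straight at local +Z comes out as (0.5, 0.5, 1.0), the familiar lavender-blue of a normal map's "straight at you" color.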
So with the matcap, what we're actually interested in is our camera-space normals. To get those, we need to know what our camera space is. The easiest way to do that (not the only way, but the easiest) is to simply instance our geometry from an object that copies the transform of the camera:
![A mesh object copying the camera's transform and instancing the hidden geometry](https://cdn.statically.io/img/i.sstatic.net/yGBWE.jpg)
Here, I have a different mesh object, copying transforms from the camera via a constraint, and then instancing a hidden mesh (shown in this image with a wireframe). We're still displaying our object-space normals, but our object space is now the same space as our camera space.
What if we don't want to have a camera? Can we use the location of the viewport eye as a space? No. While there are ways to use the eye's location in shader nodes, there is no way to use it in geometry nodes. When you think about it, this makes sense: we might, after all, have two different viewports open; should Blender keep track of different geometry for each viewport? It works with shader nodes because, yes, each viewport does keep track of its own samples, its own renders.