34

I made a high-poly model and then baked its textures onto a low-poly model.
Now I want to add the details of another normal map on top of the baked one. How can this be done?

5
  • 1
    I find this article on the topic quite interesting: blog.selfshadow.com/publications/blending-in-detail. I haven't tried it (yet), but at first look it seems that similar results should be achievable with nodes.
    – Carlo
    Commented Sep 9, 2015 at 20:24
  • 2
    If you are using the Internal renderer, adding a new texture and setting its influence to Normal will do it. With Cycles you can go for the Displace effect. I don't know of a way to mix a proper normal map with the BW factor of a texture, though.
    – Yvain
    Commented Sep 10, 2015 at 0:09
  • The thing is that I also want to be able to somehow export the texture of both normal maps combined, because I will be importing the model into the Unity game engine, and all the node data just disappears, as Unity isn't able to read/receive external nodes (not sure). Also, there is only one slot for a normal map.
    – A.D.
    Commented Sep 10, 2015 at 11:51
  • I made an OSL shader script for this purpose (see the "bonus" script at the end): blender.stackexchange.com/a/51624/131
    – dimus
    Commented May 15, 2016 at 16:08
  • I made a node group that contains all the methods mentioned here; you can switch or blend between them. I also gave credit to you guys, thanks! dropbox.com/s/yuw85rsfeeoqla6/Combine_Normal_Maps.blend?dl=0
    – John McDon
    Commented Apr 29, 2018 at 12:54

16 Answers

28

I adapted the UDN blending method from this article for Cycles nodes.

The simple formula, with "Normal map 1" (nm1) providing the large distortion and "Normal map 2" (nm2) the small details:

X = nm1.x + nm2.x, Y = nm1.y + nm2.y, Z = nm1.z

Cycles material nodes: [image]

  1. Split each normal map into its three separate channels with a Separate XYZ node.
  2. Add the X channels of both normal maps together with a Math node set to Add.
  3. Add the Y channels of both normal maps together with a Math node set to Add.
  4. Build the new vector with a Combine XYZ node. Plug the summed X into the X input of the Combine XYZ node and the summed Y into the Y. Take the Z from the first normal map.
  5. Add a Vector Math node set to Normalize. Take the output from the Combine XYZ node and plug it into the first slot of the Normalize node.
  6. Add a Normal Map node. Take the output from the Normalize node and plug it into the Color slot of the Normal Map node.
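
For reference, here is a minimal numpy sketch of the UDN formula from the article, operating on decoded vectors (the node chain above does the additions directly on the map colors, which approximates the same result):

import numpy as np

# UDN blend, per pixel: c1 and c2 are normal map colors in [0, 1].
def udn_blend(c1, c2):
    n1 = c1 * 2.0 - 1.0            # decode color -> tangent-space vector
    n2 = c2 * 2.0 - 1.0
    n = np.array([n1[0] + n2[0],   # add X channels
                  n1[1] + n2[1],   # add Y channels
                  n1[2]])          # keep Z from the first map
    n /= np.linalg.norm(n)         # what the Normalize node does
    return n * 0.5 + 0.5           # re-encode to color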

Another option is to use Multiply instead of Add when combining the X and Y channels. The bonus with this is that I have found no need for the Normalize node. The resulting normal map will look different from the method above.

3
  • That is good! But is there a way to save this texture so I can give it to my model in a game engine like Unity?
    – A.D.
    Commented Sep 11, 2015 at 13:50
  • 1
    I tried your node setup, but it led me to non-symmetrical shading (i.imgur.com/y9rUZ7L.jpg) that in my opinion is not correct. Did I make any mistake?
    – Carlo
    Commented Sep 11, 2015 at 14:02
  • What I would do is use both normal maps, each with 0.5 weight (in the Blender material), then bake the normals -> both will be combined :)
    – Yvain
    Commented Dec 2, 2016 at 16:50
20

I tried both methods presented here (by David and Hellfireboy), but neither of them seemed to work correctly (at least for image textures).

Normal map method comparison

I kept searching online and eventually found this node setup.

It's very complicated, and I honestly don't understand it, but it works superbly. Thought I'd post it here for anyone else looking.

3
  • Hmmm, any more thoughts on your method now? Commented Sep 21, 2017 at 14:54
  • 9
    that node setup comes from this thread: blenderartists.org/forum/…
    – Secrop
    Commented Feb 14, 2018 at 14:47
  • 3
    Please attach the node tree. External links are not allowed for core information as they are not permanent. Thank you for keeping the site useful.
    – vklidu
    Commented Nov 7, 2020 at 7:34
13

Just adding my 2 cents (not sure why nobody mentions this technique). I know there are supposed to be some technical drawbacks to it, but I've used it extensively without any issues...

Node setup: [image]

Render: [image]

Normal maps: [image]

2
  • 2
    I can't find anything wrong with this method, thanks. I guess this whole discussion points out the lack of a "Mix Vector" node in Blender anyway.
    – HellrazorX
    Commented May 2, 2018 at 2:21
  • @HellrazorX Because the method is not 'physically' correct. The article mentioned above goes into detail as to why. Here's the link: blog.selfshadow.com/publications/blending-in-detail
    – Hash
    Commented Aug 29, 2023 at 6:43
11

A method close to what Rico Cilliers proposes is to mix the two normal images with a MixRGB node in Mix mode, with the factor at 0.5, and push the Normal Map strength up to 2:

[image]

[image]
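
A sketch of why this lands close to the UDN-style sum, assuming the Normal Map node's strength extrapolates the decoded normal away from the flat normal (0, 0, 1) (an approximation of Blender's behavior, not its exact code): mixing at 0.5 averages the maps, and strength 2 then doubles the deviation again.

import numpy as np

FLAT = np.array([0.0, 0.0, 1.0])

# MixRGB at factor 0.5 averages the two map colors; strength 2
# (modeled here as extrapolation from the flat normal) then doubles
# the deviation, close to summing the tangent-space X/Y deviations.
def mix_half_strength_two(c1, c2, strength=2.0):
    n = 0.5 * (c1 + c2) * 2.0 - 1.0      # mix, then decode to [-1, 1]
    n = FLAT + strength * (n - FLAT)     # modeled strength behavior
    return n / np.linalg.norm(n)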

5
  • Wow, so simple. Thank you for the nodes. Commented Jan 26, 2021 at 11:32
  • 1
    well, some will say that it's not rigorous, but I tried a more sophisticated method and didn't see any difference :)
    – moonboots
    Commented Jan 26, 2021 at 12:49
  • Took 6 years but we did it boys! 😂
    – A.D.
    Commented May 1, 2021 at 17:36
  • so did you test this solution?
    – moonboots
    Commented May 1, 2021 at 17:39
  • It’s been years since I switched to MAX. Lol
    – A.D.
    Commented May 2, 2021 at 18:54
5

Here's the method I came up with: multiply the normal maps, then divide the output by the (non-color) color of a flat normal.

Normal map mix method

I use 0.50196 for red and green instead of 0.5 because 128/255 ≈ 0.50196 is the flat-normal color for normal maps baked in Blender, and the result is more accurate when compared with the normal maps rendered separately.

Normal map 1: [image]

Normal map 2: [image]

Combined: [image]

I tried the method from Blend Swap shared by Andre Price, but it produces some kind of banding (I saved the output as PNG, so it's not the result of lossy compression):

Normal map banding: [image]

I made node groups for Cycles and the compositor (note that in the compositor the color space of the images must be sRGB, not Non-Color).
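
For reference, a minimal numpy sketch of this multiply-then-divide blend, operating on raw map colors in [0, 1] (0.50196 is 128/255, the flat value discussed above):

import numpy as np

FLAT = np.array([0.50196, 0.50196, 1.0])   # flat-normal color

def multiply_divide(c1, c2):
    # Dividing by the flat color means a flat pixel in either map
    # leaves the other map's pixel unchanged.
    return c1 * c2 / FLAT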

3

I found that this worked a lot better if you continued to treat the images as RGB rather than XYZ. This means using 'Separate RGB' and 'Color Mix' set to Add.

RGB-separated normal combining: [image]

The reason is that with XYZ the image was coming out too dark (the eyes and mouth are one normal map, while the outer circle is a second one).

Here is what it looked like with RGB separation: [image]

And this is with XYZ separation: [image]

3

Another solution to try, and one that I think is quite simple.

Basically, add the vectors, but first subtract the "plain" (flat) normal from the one that's being added.

[image]
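
A minimal sketch of the idea; whether you work on the map colors (flat = (0.5, 0.5, 1.0)) or decoded vectors (flat = (0, 0, 1)), the structure is the same. The sketch uses colors:

import numpy as np

# Remove the "plain" (flat) color from the second map so that only
# its deviation is added to the first.
FLAT = np.array([0.5, 0.5, 1.0])

def subtract_then_add(base, detail):
    return base + (detail - FLAT)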

3
  • Interesting method, it keeps the details of the first normal map. I do have a question though: do you know how I could reduce the mix only in the area of the second normal map? Since this method washes out the second normal a bit too much lol
    – Racko
    Commented Jan 13, 2021 at 15:38
  • 1
    If you want to change the overall mix level of the two maps, you can use a Vector Math converter with the Scale operation AFTER the Subtract BUT BEFORE the Add. You can either punch up the strength of the added-in normal or weaken it. You can always change which normal is added in with the subtract and which is not.
    – korda
    Commented Jan 13, 2021 at 21:34
  • Oh yea, thanks, I didn't realize there was a Vector: Scale
    – Racko
    Commented Jan 15, 2021 at 15:15
2

In mathematical terms, this is impossible. Let's consider two implicitly defined surfaces $f(x,y,z)=0$ and $g(x,y,z)=0$ that we want to linearly blend into the surface

$h(x,y,z) = f(x, y, z) + g(x,y,z) = 0$

The normal fields of these surfaces are

$f_n = \mathrm{normalize}(\mathrm{grad}(f))$

$g_n = \mathrm{normalize}(\mathrm{grad}(g))$

$h_n = \mathrm{normalize}(\mathrm{grad}(f+g))$

where

$\mathrm{grad}(f) = (\partial f/\partial x,\ \partial f/\partial y,\ \partial f/\partial z)$

$\mathrm{normalize}(A) = A/\mathrm{length}(A) = A/\sqrt{A_x^2 + A_y^2 + A_z^2}$

This way you only have the normalized gradient maps $f_n$ and $g_n$, called normal maps, while you need the normalized map of the gradients' sum, $h_n$. After normalization you lose the information about the vector's length, and there is no way to recover it. You can try to simply blend the maps, and this will give an approximate result whose accuracy greatly depends on the ratio of the lengths of the gradient vectors, but it does not correspond to the mathematical definition of a normal map.

How can the problem be solved in a mathematically correct way?

  1. Use bump or displacement maps. The sum of the displacements is the displacement of the summed surface.
  2. Try saving a gradient map instead of a normal map. The sum of the gradients will be the gradient of the summed surface. You just have to normalize the sum to get the normal map.
  3. If you have neither the original displacement map nor the original high-poly model, your only option is to reconstruct the displacement map from the normal map via numerical integration.
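
A quick numeric illustration of this information loss (a sketch with arbitrarily chosen gradients):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

grad_f = np.array([0.1, 0.0, 1.0])   # shallow slope
grad_g = np.array([4.0, 0.0, 1.0])   # steep slope

true_n  = normalize(grad_f + grad_g)                        # normal of f + g
blend_n = normalize(normalize(grad_f) + normalize(grad_g))  # blend of normals

print(true_n)   # ~[0.899, 0.000, 0.438]
print(blend_n)  # ~[0.654, 0.000, 0.757] -- the gradient lengths are lost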
3
  • Thank you for giving a more mathematical perspective on this issue, rather than a guess that may look alright on a single basic example
    – Sagie Levy
    Commented Jul 18, 2022 at 13:46
  • 1
    No, it's not mathematically impossible. A normal is a vector, and you can mathematically displace a vector with another vector by using quaternion rotation: blog.selfshadow.com/publications/blending-in-detail
    – Mystiker
    Commented Jun 15, 2023 at 20:06
  • The article you cite says that the proposed algorithm is only an improved approximation of surface mixing. The problem cannot be solved mathematically locally at a point; rotation implies that you know the local tangent space after the displacement, but that is not the case. Commented Feb 4 at 19:22
2

There is an article that goes into extensive depth about merging two normal maps.

The mathematically correct approach is to use Reoriented Normal Mapping (described in that article):

float3 t = tex2D(texBase,   uv).xyz * float3( 2,  2, 2) + float3(-1, -1,  0);  // base: decode x/y to [-1,1], bias z up by one
float3 u = tex2D(texDetail, uv).xyz * float3(-2, -2, 2) + float3( 1,  1, -1);  // detail: decode, with x/y negated
float3 r = normalize(t * dot(t, u) - u * t.z);                                 // reorient the detail normal along the base
return (r * 0.5) + 0.5;                                                        // re-encode to [0,1] color
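
For experimenting outside a shader, here is a direct numpy port of the snippet above (base and detail are map colors in [0, 1]):

import numpy as np

# Reoriented Normal Mapping, ported line for line from the HLSL above.
def rnm_blend(base, detail):
    t = base   * np.array([ 2.0,  2.0, 2.0]) + np.array([-1.0, -1.0,  0.0])
    u = detail * np.array([-2.0, -2.0, 2.0]) + np.array([ 1.0,  1.0, -1.0])
    r = t * np.dot(t, u) - u * t[2]
    r /= np.linalg.norm(r)
    return r * 0.5 + 0.5    # back to [0, 1] color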

This is superior to UDN Blending, and it's far superior to using the Mix node.

I converted the Reoriented Normal Mapping code into a Blender node group; here is the node setup:

Blender node setup for Reoriented Normal Mapping

You plug in your Base normal map and your Detail normal map, and then you can plug the output into the Normal Map node:

Image showing how to use the Normal Map Merge node

1

Using some baked tangent-space normal maps, it looks like radcapricorn's Blend Swap node group works well, but the same result can be achieved with a simpler setup.
Comparison: [image]
It looks like combining the normal map images gives bad results; it's better to combine the Normal Map nodes' output vectors.
Here is my setup:
Node setup: [image]
Node detail: [image]
You can easily change either normal map's strength and fix the result by modifying the color value of the Mix/Divide color node.

2
  • Just watch out: since you're dealing in world space (Bump node) and not tangent space (non-color-data tangent-space normal map), you should care about the Z axis too. The Normal Map node actually transforms from TS to WS. It just so happens that on a plane with zero Z size, looking from the top, everything works. But once you get into different geometry that isn't flat like this, problems arise.
    – Alphisto
    Commented Dec 11, 2016 at 11:28
  • @Alphisto Good to know, thanks. If I have time I will change my plane for a cube and bake some TS normal maps.
    – Bithur
    Commented Dec 11, 2016 at 15:09
1

Using a MixRGB node to mix two Normal Map nodes seems to be the correct way:

blender shader node mixing two normal maps

This way you can control the strength of each normal map and blend them the way you prefer.

I found this answer in the tutorial here: https://blenderartists.org/t/blender-mix-2-normal-maps-together/1245369
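
A minimal bpy sketch of this wiring (assuming an existing node-based material named "Material" with its default Principled BSDF; the names are placeholders):

import bpy

mat = bpy.data.materials["Material"]       # assumed to exist, use_nodes on
nodes, links = mat.node_tree.nodes, mat.node_tree.links

nm1 = nodes.new("ShaderNodeNormalMap")     # first normal map
nm2 = nodes.new("ShaderNodeNormalMap")     # second normal map
mix = nodes.new("ShaderNodeMixRGB")
mix.blend_type = 'MIX'
mix.inputs["Fac"].default_value = 0.5      # blend the two maps evenly

links.new(nm1.outputs["Normal"], mix.inputs["Color1"])
links.new(nm2.outputs["Normal"], mix.inputs["Color2"])
links.new(mix.outputs["Color"], nodes["Principled BSDF"].inputs["Normal"])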

1

I created my own, manual normal mapping node because of some bugs in Blender's treatment of tangent space normal maps. By doing it manually, I am able to create several useful inputs that allow the chaining of sequential normal maps:

[image]

The second normal map here acts in a new tangent space as provided by the first normal map-- it treats the first as a certain rotation of normals, and then rotates that vector again by the second.

Here is the manual normal map group:

[image]

I repair the tangent to be orthogonal to the normal and generate a normalized binormal vector. I then create a new vector that is our remapped normal color multiplied by our tangent/binormal/normal vectors.
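
In vector terms, here is a sketch of what such a group computes per shading point, assuming the standard TBN construction (Gram-Schmidt tangent repair, cross-product binormal):

import numpy as np

def apply_normal_map(normal, tangent, color):
    n = normal / np.linalg.norm(normal)
    t = tangent - n * np.dot(tangent, n)   # repair: strip the n component
    t /= np.linalg.norm(t)
    b = np.cross(n, t)                     # normalized binormal
    m = color * 2.0 - 1.0                  # remap [0, 1] color to [-1, 1]
    out = m[0] * t + m[1] * b + m[2] * n   # through the TBN basis
    return out / np.linalg.norm(out)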

There are some additional useful things in here, such as an arbitrary tangent input (allowing for easy cylinder- or sphere-mapped tangents), rotation of the tangent to account for rotated normal map mappings, and a fix for normal maps baked with different coordinate handedness.

The file also contains a node group to bake orthonormal tangent space normal maps as emission (likely, from object space normals, which Blender handles just fine.) As mentioned earlier, I created these node groups to deal with some bugs with Blender's treatment of normal maps, rather than as a way to explicitly answer this question, and that means that there are differences between the way that these node groups handle normal maps and the way that Blender handles normal maps. The way that I do it is correct; the way that Blender does it is not. This is likely to make normal maps baked in other applications more correct. (There are some exceptions. XNormal is based around Blender, meaning it imitates its errors.)

0

In my experiments with all the great ideas here, I noted that if you don't normalize the combined normal, you get boosted subsurface scattering and lose proper highlights. I tried setting the second vector of the Normalize node [aka Vector Math (Normalize)] to 1 or 2, or to 0.50196 on the first two components, and there was really no difference, of course, because the vectors are already close to unit length. The default, or 0.5, was sufficient in my tests.

So in the end, to merge the normals from a Bump node (procedural texture) and a Normal Map (baked from a high-res sculpt onto the retopologized version), two Vector Math nodes, an Add followed by a Normalize, were sufficient. In this situation the original Bump had 1/4 strength and 1/5th the distance, while the Normal Map was of course baked and set to strength 1.0, so I basically doubled the distance and strength of the pre-merge normal-outputting nodes to best match the original result (full high-res sculpt plus procedural temp texture). Readjust to taste with your new setup; the maps seem to mitigate each other, so the smaller detail gets boosted rather than the retopo match.
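
In vector terms, the merge reduces to this small sketch (n1 and n2 being the two pre-merge normal outputs):

import numpy as np

# The two Vector Math nodes described above: Add, then Normalize.
def merge(n1, n2):
    n = n1 + n2
    return n / np.linalg.norm(n)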

To me this seems like the easiest distillation of the ideas posted so far, tested by rendering a real subsurface-scattered head sculpt (and therefore worth posting; I hope it's helpful input/feedback).

As a final note, I'm not sure I won't just replace the procedural texture with a hand-painted one (a skin texture, so combined), as that makes more sense in the end.

0

In Blender 2.8 there is a "Mix" node.
It just mixes the two colors; then feed the result to a Normal Map node:

[image]

2
  • Connecting the Normal socket to a Color one? Does that work at all? Commented Nov 1, 2019 at 11:04
  • Yeah, it works... I'm assuming it takes it as a texture, like the bottom node that is a normal map texture
    – rrswa
    Commented Nov 1, 2019 at 11:09
0

I found this very good article on Shadertoy: https://www.shadertoy.com/view/4t2SzR

I adapted three of its blending methods (Linear, UDN, RNM) to Blender.

Blending modes: [image]

Linear: [image]

RNM: [image]

UDN: [image]

-1

Open both maps as layers in a graphics program (Krita and GIMP are free). Color-filter one layer so that it closely matches the other (not always necessary). Erase through one layer to expose the other; partially erase to combine (use the Eraser tool with partial opacity). Simple, with an efficient file size and extreme portability.

1
  • Hello and welcome. How exactly is this related to Blender?
    – Harry McKenzie
    Commented Mar 1, 2023 at 0:35
