
I am trying to render objects with random noise generated textures in Cycles. I am facing this issue: the texture is almost lost after the render

I am using this material configuration: [screenshot of the material node setup]

When I start the render, my cube looks like this: [screenshot of the noisy preview]

When the render is finished, it looks like this: [screenshot of the final render]

I have noticed that setting the samples to 0 makes the final render look like the first image as well, but that's not what I want.

  • Maybe you have enabled the Denoiser, which will blur the noise.
    – moonboots
    Commented Feb 4 at 22:19

1 Answer

The White Noise Texture always generates a random greyscale value or color per pixel. The texture is not "lost" after rendering; the only reason you can see the colored noise at the beginning is that, at a low number of samples, the image is not yet fully refined. Because each rendered pixel is already an average of sub-pixel random values, the random distribution converges to a medium grey even when the texture is colored: it is not really one color per pixel, it is just that in a rendered image one pixel is the highest resolution you can get.
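Put differently, the average of many independent random colors has an expected value of mid-grey. Here is a small plain-Python sketch (not Blender/bpy code; the function name `average_random_color` is made up purely for illustration) of a renderer accumulating random per-sample colors into one pixel:

```python
import random

# Plain-Python sketch (not Blender code): why averaging many random
# per-sample colors converges to a medium grey.
def average_random_color(samples):
    """Average `samples` uniformly random RGB values, like a renderer
    accumulating sub-pixel white noise into one pixel."""
    totals = [0.0, 0.0, 0.0]
    for _ in range(samples):
        for channel in range(3):
            totals[channel] += random.random()
    return [t / samples for t in totals]

random.seed(0)
print(average_random_color(1))     # one sample: a vivid random color
print(average_random_color(4096))  # many samples: every channel near 0.5
```

With one sample per pixel you still see vivid random colors; after a few thousand samples every channel sits very close to 0.5, which is exactly the medium grey of the finished render.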

If you additionally have the denoiser enabled, the random colors that already appear grey will be averaged to actual grey. But even without denoising, the result is not what you expect, as it is a kind of anti-aliased noise. Here is a comparison between the denoised and the noisy image, zoomed in to 400%:

basic white noise

If you want to retain noise of a certain visible size, you have to manipulate the coordinates to get discrete areas that each receive a single random value from the White Noise Texture. I do not know the right terms to explain this accurately, so here is an example:

snapping coordinates for white noise

In the example, I use a Vector Math node set to Snap to quantize the coordinates into steps. The Increment values determine how large the noise cells will be.

When you use the coordinates without snapping, the White Noise Texture assigns a random color to every coordinate value, no matter how far you zoom in. This results in a random color for each pixel, or simply an overall grey appearance in the render.

If you snap the coordinates so they only change in a specified step size, the random assignment will only change once per increment; this way you can control the noise size. The following image shows the coordinates with a larger increment to make the effect easier to see:

stepped coordinates

If you now feed the snapped coordinates into a White Noise Texture, the random color will only change at each new increment:

stepped random colors
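For clarity, the snap-then-noise chain above can be sketched in plain Python. This is not Blender code: `snap` imitates the Vector Math node's Snap operation, and `white_noise` is only a stand-in hash, not the texture's real algorithm:

```python
import hashlib
import math

# Mirrors the Vector Math node's Snap operation: quantize each
# coordinate to multiples of the increment, floor(v / inc) * inc.
def snap(vec, increment):
    return tuple(math.floor(v / i) * i for v, i in zip(vec, increment))

# Stand-in for the White Noise Texture (NOT Cycles' implementation):
# any input maps deterministically to a pseudo-random value in [0, 1).
def white_noise(vec):
    digest = hashlib.md5(repr(vec).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2.0**64

inc = (0.25, 0.25, 0.25)
# Two points inside the same 0.25-sized cell get the same random value...
a = white_noise(snap((0.30, 0.60, 0.0), inc))
b = white_noise(snap((0.40, 0.70, 0.0), inc))
# ...while a point in the neighbouring cell gets a different one.
c = white_noise(snap((0.60, 0.60, 0.0), inc))
print(a == b, a == c)  # True False
```

Shrinking the increments makes the cells, and therefore the visible noise, smaller.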

//EDIT: If you want to keep the noise size at exactly 1 pixel, you can do the following: instead of using the Object output of the Texture Coordinate node, use the Window output. Then you only have to set the X and Y increments of the Snap node to the reciprocals of the image width and height: for a render of 1920 × 1080 pixels, Increment X needs to be 1/1920 and Increment Y 1/1080. The Z value is irrelevant in this case. This should even keep the noise visible after denoising.
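A quick numeric sanity check of those reciprocals, again in plain Python rather than Blender code (the `cell` helper is hypothetical, only to show each pixel snapping into its own cell):

```python
import math

# Window coordinates run from 0 to 1 across the rendered image, so
# Increment X = 1/width and Increment Y = 1/height make one snap cell
# exactly one pixel in size.
width, height = 1920, 1080
increment_x = 1 / width   # 1/1920
increment_y = 1 / height  # 1/1080

def cell(px, py):
    # window coordinate of the pixel centre, then its snap cell index
    u = (px + 0.5) / width
    v = (py + 0.5) / height
    return (math.floor(u / increment_x), math.floor(v / increment_y))

# Each pixel lands in its own cell, so each gets its own random value:
print(cell(0, 0), cell(1, 0), cell(1919, 1079))  # (0, 0) (1, 0) (1919, 1079)
```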

The only problem occurs if you want to animate the object: the noise stays exactly where it is in the image and does not move with the object. That is of course expected, since the Window coordinates (the center and borders of the image) do not move. And even if a colored pixel did move with the object, perspective would keep it from staying exactly one pixel in size, so it cannot simply be transformed along with the object.

You can of course enter these values directly, but to make it more comprehensible and easier to adjust, you can compute them with some additional Math nodes and a Combine XYZ node:

snapped to window size

Finally, an example of why a White Noise Texture cannot always be exactly one pixel per random value while also being fixed to an object's surface and transforming along with it (just in case this is not obvious):

To keep it simple, let us say a camera looks at a cube head-on and you match the noise texture so it is exactly one pixel per rendered cube pixel. In the frontal view, the cube is rendered at a size of 864 × 864 pixels, and each pixel has a random color.

But if the cube is now rotated, or the camera moves around it, perspective distortion changes the edge lengths of the cube. If you want the texture to stay on the cube where it is, it is no longer possible to keep it at 1 pixel per random value, because all edges (or the apparent widths of the faces from left to right) now have lengths different from the original 864 pixels.

perspective distortion

To draw a final conclusion: the solution with the Window coordinates is suitable for still renders if you want to make sure the random values change on each pixel. For animations, or for any render where the perspective or the position of the noise on the object matters, you should go with the Object coordinates, or maybe Generated or UV - at least something that depends on the object's transformations - but then the noise size will vary.

  • I understand. So there is no way to get a pixel-per-pixel different value in the final render?
    – xWourFo
    Commented Feb 4 at 22:38
  • @xWourFo Yes of course, as I said: you cannot use denoising if you want to preserve the pixel-by-pixel randomness. Or use the Noisy Image output in the Compositor. But what you show in your first example is not the pixel-by-pixel result; in the beginning, with very few samples, some color spots are larger than 1 pixel, and usually the distribution, hue, and saturation are not as uniform as in the finished render. As I said, even the colored white noise will still appear more or less grey. Commented Feb 4 at 22:43
  • @xWourFo Oh, actually there is a solution. I'll add it to my answer. Commented Feb 4 at 22:57
  • I'm not saying so for sure, but would it be more accurate to describe White Noise as returning a (discontinuous) random value per shading point?
    – Robin Betts
    Commented Feb 5 at 10:12
  • @RobinBetts I'm open to any attempt at describing what the actual output of the White Noise is. The Blender manual itself says in the introduction: "The White Noise Texture node returns a random number based on an input Seed." While it says "a random number", this turns out to be a number per pixel no matter how much you zoom in or out - unless you specify a size by manipulating the coordinates, e.g. with a Snap node. I find the term "discontinuous" problematic because to me it sounds like there is a definite boundary between random values, while zooming in seems to refute this. Commented Feb 5 at 10:29
