
Hey there Blender Community,

I've come across an odd problem with rendering. When I render an image at 256*256 the image is blurry. But when I render the image at 1080*1080 and shrink the image such that it takes up the same space on the screen as the 256*256 image, the quality is markedly better.

The difference is subtle, but here's the proof:

This is the 256*256 image:

(screenshot: the 256*256 render)

This is a 1080*1080 image, shrunk to 256*256:

(screenshot: the 1080*1080 render, shrunk to 256*256)

The resized image is less blurry, the shadows are less grainy, and the overall quality just seems ineffably better.

Why is this so? If nothing in terms of sampling has been changed, and the final product occupies the same number of pixels on screen, why is the quality of one superior? And is there any way to achieve comparable quality without having to render and downsize?

I ask because I'm rendering a ton of images and video and am trying to find the optimal method to do so. Obviously, increasing the resolution of the render is going to increase the rendering time. Is there a way to increase quality without increasing time?

Edit: I've noticed this discrepancy in both Cycles and Blender Internal, even with the sampling fully cranked in Cycles.

2nd Edit: Below are what I believe to be the relevant render settings. Note, however, that these were held constant across the 256*256 and 1080*1080 renders. I do recognize that my raytracing samples (Constant QMC) are low, but varying this parameter didn't seem to accomplish anything meaningful for a scene this simple.

(screenshots: render settings panels)

  • Probably because your render has an insufficient number of light samples and low anti-aliasing quality. By rendering a larger image and shrinking it, you are likely cancelling out those issues: you render at a high resolution and then benefit from the downsizing algorithm. Impossible to know without looking at your render settings. Commented Apr 2, 2018 at 18:44
  • @DuarteFarrajotaRamos I added what I think are the relevant render settings. These settings had no substantial effect on my scene; only rendering at a higher resolution did. Commented Apr 2, 2018 at 19:00

3 Answers

Answer 1 (score 6):

If you render at 256x256, each pixel gets only 16 anti-aliasing (AA) samples.

If you increase the resolution to 1080x1080, the same area is now covered by roughly 16 pixels (about 4x4), so it receives roughly 16 times as many AA samples.

Rendering larger also produces less noise in the ambient occlusion, which further improves clarity when the image is scaled down to a small size.

  • Thanks for this answer! Between yours and @RobinBetts' answer, I think I have a better idea of why rendering at a higher resolution and downsampling does what it does. Commented Apr 3, 2018 at 4:16
Answer 2 (score 5):

The two cases are not strictly comparable: the downsampling takes place in different domains. In the shrunk 1080 case, a high-resolution sampling of the 3D scene is reduced by interpolation in 2D.

If you like, in the 1080 case you are sub-pixel sampling the final 256 image at about 4x4. Roughly 16 times as much information contributes to the approximation of the color of each of the 256x256 pixels as there would be if you had only sampled the 3D scene at 256x256 in the first place.
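The noise-reduction side of this argument can be illustrated numerically: if each rendered pixel is a noisy estimate of the true color, averaging a 4x4 block of independent estimates cuts the error by about a factor of 4 (the square root of 16). A minimal sketch, where the "true" pixel value and noise level are made-up stand-ins for render output:

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 0.5   # hypothetical "true" color of one final pixel
NOISE = 0.2        # hypothetical per-pixel Monte Carlo noise (std. dev.)
TRIALS = 10_000

# Error of a single noisy pixel estimate: the direct 256x256 case.
single = [
    abs(random.gauss(TRUE_VALUE, NOISE) - TRUE_VALUE) for _ in range(TRIALS)
]

# Error after averaging a 4x4 block of independent noisy pixels:
# roughly the render-at-1080-then-shrink case.
block = [
    abs(statistics.mean(random.gauss(TRUE_VALUE, NOISE) for _ in range(16))
        - TRUE_VALUE)
    for _ in range(TRIALS)
]

mean_single = statistics.mean(single)
mean_block = statistics.mean(block)
print(mean_single)  # roughly 0.16
print(mean_block)   # roughly 0.04, about sqrt(16) = 4x smaller
```

The same averaging happens implicitly whenever a downscaling filter combines several source pixels into one output pixel.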

  • Much obliged for the answer! Commented Apr 3, 2018 at 4:16
Answer 3 (score 2):

I think the source of the confusion is that the number of samples is counted per pixel, not per image. This means that with the same number of samples per pixel (spp), more pixels means more samples.

Compare a 10x10 pixel image: $10*10*1=100$ samples.

With a 100x100 pixel image: $100*100*1=10,000$ samples.

You can then reorganize the 100x100 pixel image by combining boxes of 10x10 pixels, and you will get a 10x10 image that effectively holds 10x10 = 100 samples per pixel.
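The box-combining step can be sketched in plain Python. This is a toy illustration: synthetic Gaussian noise stands in for render noise, and a plain box average stands in for the image scaler (Blender's actual downscaling filters are smoother than a box average):

```python
import random
import statistics

random.seed(1)

SIZE = 100   # high-resolution side length
BOX = 10     # downscale factor

# A noisy 100x100 "render" at 1 sample per pixel: true value 0.5
# plus made-up Gaussian noise standing in for Monte Carlo noise.
hi_res = [[0.5 + random.gauss(0.0, 0.2) for _ in range(SIZE)]
          for _ in range(SIZE)]

# Combine each 10x10 box into one pixel by averaging: the result is
# a 10x10 image whose pixels effectively hold 100 samples each.
lo_res = [
    [
        statistics.mean(
            hi_res[by * BOX + y][bx * BOX + x]
            for y in range(BOX) for x in range(BOX)
        )
        for bx in range(SIZE // BOX)
    ]
    for by in range(SIZE // BOX)
]

hi_noise = statistics.stdev(v for row in hi_res for v in row)
lo_noise = statistics.stdev(v for row in lo_res for v in row)
print(hi_noise)  # roughly 0.2: noise of the 1-spp image
print(lo_noise)  # roughly 0.02: 100 samples divide noise by sqrt(100) = 10
```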

Rendering at high resolution with few samples:

  • As you have discovered, you have the flexibility of downscaling to reduce noise later.
  • Some quality may be lost when resizing the image.
  • Can use more memory and be slightly slower to render.

Rendering at low resolution with more samples:

  • Can use less memory and be slightly faster to render.
  • Less flexible: you cannot upscale the image just because you have more samples.
