Hey there, Blender Community,
I've come across an odd problem with rendering. When I render an image at 256*256, it comes out blurry. But when I render the same scene at 1080*1080 and shrink the result so that it occupies the same on-screen space as the 256*256 render, the quality is markedly better.
The difference is subtle, but here's the proof:
This is the 256*256 image:
This is a 1080*1080 image, shrunk to 256*256:
The resized image is less blurry, the shadows are less grainy, and the overall quality just seems ineffably better.
Why is this so? If the sampling settings are unchanged and the final product occupies the same pixel space, why is one clearly superior? And is there any way to achieve comparable quality without having to render large and then downsize?
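For anyone who wants to reproduce the comparison: the shrink step is nothing fancy, just an ordinary image resize done outside Blender. A minimal Python/Pillow sketch of what I mean (the filenames are placeholders, and Lanczos is just one reasonable filter choice, not necessarily the exact one I used):

```python
from PIL import Image

# Load the full 1080*1080 render (filename is a placeholder).
big = Image.open("render_1080.png")

# Downscale to the 256*256 target. A Lanczos filter averages many
# source pixels per output pixel, which is what smooths out the
# grain and aliasing.
small = big.resize((256, 256), Image.LANCZOS)

small.save("render_1080_shrunk_to_256.png")
```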
I ask because I'm rendering a ton of images and video, and I'm trying to find the most efficient way to do it. Obviously, increasing the render resolution increases the render time. Is there a way to increase quality without increasing time?
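For context, the brute-force workaround I'm experimenting with for batches is just looping that resize over a folder of rendered frames (again only a sketch; the folder names are placeholders):

```python
from pathlib import Path
from PIL import Image

SRC = Path("renders_1080")  # placeholder: folder of 1080*1080 frames
DST = Path("renders_256")   # placeholder: output folder
DST.mkdir(exist_ok=True)

# Downscale every rendered frame; for video I'd re-encode the
# resized frames afterwards.
for frame in sorted(SRC.glob("*.png")):
    Image.open(frame).resize((256, 256), Image.LANCZOS).save(DST / frame.name)
```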
Edit: I've noticed this discrepancy in both Cycles and Blender Internal, even with the sampling cranked all the way up in Cycles.
2nd Edit: Below are what I believe to be the relevant parts of my render settings. Note, however, that these were held constant across both the 256*256 and the 1080*1080 renders. I recognize that my raytracing sample count (Constant QMC) is low, but varying that parameter didn't accomplish anything meaningful for a scene this simple.