
I have an object in my scene, and I want to render it so that each pixel holds its distance from the camera in metric units: pixels nearer to the camera should be whiter, pixels further away blacker, with grey in between. I don't want to use normalization, as that would alter the correct Z depth values, so I'm trying to use the Map Value node, but I'm not sure how to set the correct values, given that logically the pixels nearer to the camera will have smaller Z values than pixels far from the camera.
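For reference, a minimal Python sketch of this kind of Map Value setup (the near/far values are placeholders, and the node/socket names assume a recent Blender API). It remaps depth to 0..1 for a viewable greyscale and inverts it so near is white; since this remapping is only for preview, keep the raw Z pass for the actual metric data:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# The Z pass must be enabled on the render layer (2.8+ API shown).
bpy.context.view_layer.use_pass_z = True

rl = tree.nodes.new("CompositorNodeRLayers")

# Map Value computes (input + offset) * size, then clamps to [min, max].
# With offset = -near and size = 1 / (far - near), depth lands in 0..1.
near, far = 0.1, 20.0  # placeholder clip range; match your camera's
map_val = tree.nodes.new("CompositorNodeMapValue")
map_val.offset[0] = -near
map_val.size[0] = 1.0 / (far - near)
map_val.use_min = True
map_val.min[0] = 0.0
map_val.use_max = True
map_val.max[0] = 1.0

# Invert so that near = white and far = black.
invert = tree.nodes.new("CompositorNodeInvert")
comp = tree.nodes.new("CompositorNodeComposite")

z_out = rl.outputs.get("Depth") or rl.outputs["Z"]  # "Z" in older Blenders
tree.links.new(z_out, map_val.inputs[0])
tree.links.new(map_val.outputs[0], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], comp.inputs["Image"])
```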

This is my current status:

[screenshot: current compositor node setup]

As seen in the image below, the Z values of the closer parts are bigger than those of the further parts, and I don't know how to fix this:

[screenshot: Z values sampled in the image preview]

My .blend file can be found here:


1 Answer


This is the default formatting of Blender's z-buffer, so I'm not sure where you're stuck? Are you trying to get these values into the RGB channels? If so, you can simply copy them like this:

[node setup using Combine RGBA to copy the Z channel into RGB]

Remember to save this in a linear floating point format (such as OpenEXR). Formats such as PNG will mangle this data, as they are not meant to store depth.

I should probably add that using the Combine RGBA node here isn't strictly necessary; it still works if you connect Z directly to the Image output. Using Combine RGBA just lets you specify each channel explicitly, to be on the safe side.
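As a sketch (not part of the original answer), the Combine RGBA setup plus the OpenEXR output settings might look like this in Python; node and socket names assume a recent Blender API:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

rl = tree.nodes.new("CompositorNodeRLayers")
combine = tree.nodes.new("CompositorNodeCombRGBA")
comp = tree.nodes.new("CompositorNodeComposite")

# Copy the Z pass into all three color channels.
z_out = rl.outputs.get("Depth") or rl.outputs["Z"]  # "Z" in older Blenders
for channel in ("R", "G", "B"):
    tree.links.new(z_out, combine.inputs[channel])
tree.links.new(combine.outputs["Image"], comp.inputs["Image"])

# Full-float OpenEXR keeps the metric depth values intact.
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '32'
```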

  • Thanks for your answer. I'm stuck and want two things: first, I want to set the correct values on the Map Value node so that the result is a proper greyscale (white at the point nearest the camera, black at the point furthest from it). Second, I want to be able to read the metric distance from the camera when dragging along the image in the preview, where pixels closer to the camera have lower values than pixels far from the camera. Do you get what I mean?
    – Tak
    Commented Feb 7, 2017 at 2:10
  • @Tak You can't "perfectly grayscale" metric values into RGB channels. Your scene is in Blender units and an object can be anywhere from zero to infinity from the camera, while a pixel can only store 256 different shades of grey, so how do you expect to unambiguously map metric values to shades of grey? Only by using float images, as mentioned by JtheNinja.
    – Duarte Farrajota Ramos
    Commented Feb 7, 2017 at 2:21
  • @DuarteFarrajotaRamos Thanks for the info. I'm not trying to map to infinity; I'm trying to set the correct values on the Map Value node. I also don't know why the points closer to the camera have higher values than points farther from the camera. I've updated my question and added another screenshot.
    – Tak
    Commented Feb 7, 2017 at 2:31
  • If you set the scale for the scene to meters, you will have meters as the units for the Z channel. The correct Z values will be exported if you save your file to an EXR file. Do avoid normalization and any other image format unless you are absolutely certain that you know how to deal with any scale conversion and avoid any distortion caused by the view transform or the bit depth of the chosen format. Use the clipping points for the camera to determine the extremes of the scale (see the sketch after these comments).
    – user1853
    Commented Feb 7, 2017 at 2:50
  • The only thing I would add, aside from listening to JtheNinja's and Cegaton's sagely advice, is to not expect to see depth. That is, the data is a linear encoding of distance, and visualizing it according to some perceptual metric would require a more complex transform. The sRGB EOTF would not be suitable here, but rather a log-like transform where you are aware of the upper and lower bounds.
    – troy_s
    Commented Feb 7, 2017 at 4:23
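Following up on the comments above, a small sketch for checking the camera clipping points and reading a pixel's metric depth back from a saved EXR; the file path and pixel coordinates below are hypothetical:

```python
import bpy

# The camera clipping points bound the usable depth range.
cam = bpy.context.scene.camera.data
print("clip range:", cam.clip_start, "to", cam.clip_end)

# Load a rendered EXR and read the depth stored in its R channel at one
# pixel; EXR loads as linear float, so the values come back unmangled.
img = bpy.data.images.load("/tmp/depth.exr")
width, height = img.size
x, y = 100, 50  # pixel coordinates, origin at bottom-left
index = (y * width + x) * img.channels
print("depth at pixel:", img.pixels[index], "scene units")
```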
