The depth map generated by the Cycles render engine appears to have (radial) distortion, while the one from the Blender Internal renderer does not.
However, I could not find any information on the internal camera model used by Cycles, hence this post.
Specifics
With the Blender default scene and default parameters, I use a Viewer node connected to the Z render layer and a Python script to read the generated depth map, compute 3D point coordinates, and export them as OBJ.
The node graph is very simple:
So is the Python export code:
import bpy
import numpy as np

print("exporting depth pointcloud to OBJ...")

scale = bpy.context.scene.render.resolution_percentage / 100

# the Viewer Node stores flat RGBA floats; reshape to (rows, cols, channels)
pixels = bpy.data.images['Viewer Node'].pixels
img = np.array(pixels[:]).reshape(
    int(bpy.context.scene.render.resolution_y * scale),
    int(bpy.context.scene.render.resolution_x * scale), -1)

camdata = bpy.data.objects["Camera"].data
# derive the sensor height from the image aspect ratio (horizontal sensor fit)
sensor_height = bpy.context.scene.render.resolution_y / bpy.context.scene.render.resolution_x * camdata.sensor_width

f = open("/tmp/pointcloud_blender.obj", "w")
for u in range(0, img.shape[1]):
    for v in range(0, img.shape[0]):
        d = img[v, u][0]
        if d > 100.0:  # skip background / clipped pixels
            continue
        # pinhole unprojection: pixel (u, v) with depth d -> camera-space point
        x = d * (0.5 - float(u) / float(img.shape[1])) * camdata.sensor_width / camdata.lens
        y = d * (0.5 - float(v) / float(img.shape[0])) * sensor_height / camdata.lens
        z = d
        f.write("v " + str(x) + " " + str(y) + " " + str(z) + "\n")
f.close()
print("DONE")
When rendering with the Blender Internal engine, the generated point cloud looks like a flat square, as it should (white), but the Cycles render engine gives a distorted point cloud (red):
Looking at the depth maps, one can clearly see radial distortion in the Cycles one (Blender Internal render vs Cycles render, exported with a File Output node):
This is a cut through the depth maps (blue, green) and the difference image (red) at row 240:
(The two depth maps being different is not really the problem here, as long as I have the correct unprojection model.)
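One hypothesis I can at least test numerically (an assumption on my part, not confirmed by any documentation): if the Cycles Z pass stored the Euclidean distance along the camera ray instead of the planar Z depth, then for a fronto-parallel plane at depth Z the value along an image row would follow d(u) = Z * sqrt(x(u)^2 + f^2) / f, which is exactly the bowl-shaped curve in the row cut above. A minimal sketch with the default Blender camera parameters:

```python
import numpy as np

# Assumption (unverified): the Cycles Z pass stores ray length, not planar depth.
f_mm = 50.0        # default Blender camera focal length, mm
sensor_w = 36.0    # default sensor width, mm
Z = 10.0           # planar depth of a hypothetical fronto-parallel plane
width = 640

u = np.arange(width)
x = (0.5 - u / width) * sensor_w            # sensor-plane x coordinate, mm
d = Z * np.sqrt(x**2 + f_mm**2) / f_mm      # predicted ray-length "depth"
# minimal at the image centre (d == Z), growing toward the edges
```

The resulting curve has the same shape as the measured Cycles row profile, which is what made me suspect the difference is in the depth convention rather than in lens distortion.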
Research
I looked in the documentation, at this answer stating that the Blender camera has no distortion coefficients, and at this blog describing the camera model, which is the one I use above.
I also studied the Cycles code (src/render/camera.cpp, src/util/util_projection.h, src/util/util_transform.h) but could not find any trace of radial distortion.
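Since I found no distortion code, one thing I considered trying is converting the Cycles depth values from ray length back to planar Z before unprojecting. This is only a sketch under my own (unverified) assumption about what the Z pass stores; the function name and parameters are mine:

```python
import numpy as np

def ray_dist_to_planar_depth(dist, u, v, width, height,
                             sensor_w, sensor_h, focal):
    """Hypothetical fix: convert distance-along-ray to planar depth.

    Assumes (unverified) that `dist` is the Euclidean distance from the
    camera centre along the pixel's ray. All lengths are in millimetres,
    using the same sensor-plane convention as the export script above.
    """
    x = (0.5 - u / width) * sensor_w
    y = (0.5 - v / height) * sensor_h
    # divide by the norm of the unnormalised ray direction (x, y, focal)
    return dist * focal / np.sqrt(x * x + y * y + focal * focal)
```

At the image centre this is the identity; toward the corners it shrinks the value, which would flatten the bowl shape seen above if the assumption holds.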
Question
Does anybody know the internal camera model used by the Cycles render engine, or how to compute the correct camera intrinsic parameters? I would guess the lens has distortion, but I could not find any parameters, or even code implementing it.
I need to combine the point clouds unprojected from depth maps of different views, but with this distortion they are not usable.
Thanks!