
The depth map generated by the Cycles rendering engine seems to have (radial) distortion, while the Blender renderer does not.

However, I could not find any information on the internal camera model used by the Cycles engine, hence this very post.

Specifics

With the Blender default scene and default parameters, I use a Viewer node connected to the Z output of the Render Layers node, plus a Python script that reads the generated depth map, computes 3D point coordinates, and exports them as OBJ.

The node graph is very simple:

[screenshot: compositor graph with the Render Layers node's Z output linked to a Viewer node]
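
For reproducibility, the same graph can be built in Python; a minimal sketch, written against Blender 2.7x (the pass toggle and the socket name may differ in other versions):

import bpy

# enable the Z pass on the default render layer (in 2.8+ this lives on the view layer)
scene = bpy.context.scene
scene.render.layers["RenderLayer"].use_pass_z = True

# rebuild the compositor graph: Render Layers -> Viewer
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render_layers = tree.nodes.new(type='CompositorNodeRLayers')
viewer = tree.nodes.new(type='CompositorNodeViewer')
viewer.use_alpha = False

# the Z output is named 'Depth' in recent versions ('Z' in older ones)
tree.links.new(render_layers.outputs['Depth'], viewer.inputs['Image'])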

The Python export code is just as simple:

import bpy
import numpy as np

print("exporting depth pointcloud to OBJ...")

scale = bpy.context.scene.render.resolution_percentage / 100

# the Viewer node stores the render result as a flat RGBA float buffer
pixels = bpy.data.images['Viewer Node'].pixels
img = np.array(pixels[:]).reshape(
    int(bpy.context.scene.render.resolution_y * scale),
    int(bpy.context.scene.render.resolution_x * scale), -1)

camdata = bpy.data.objects["Camera"].data

# vertical sensor size implied by the aspect ratio (horizontal sensor fit)
sensor_height = bpy.context.scene.render.resolution_y / bpy.context.scene.render.resolution_x * camdata.sensor_width

f = open("/tmp/pointcloud_blender.obj", "w")

for u in range(0, img.shape[1]):
    for v in range(0, img.shape[0]):

        # depth is stored in the first channel
        d = img[v, u][0]

        # skip background pixels (they get a huge depth value)
        if d > 100.0:
            continue

        # unproject, assuming depth is measured along the camera Z axis
        x = d * (0.5 - float(u) / float(img.shape[1])) * camdata.sensor_width / camdata.lens
        y = d * (0.5 - float(v) / float(img.shape[0])) * sensor_height / camdata.lens
        z = d

        f.write("v " + str(x) + " " + str(y) + " " + str(z) + "\n")

f.close()

print("DONE")

When rendering with the Blender Internal engine, the generated point cloud is the flat square it should be (white), but the Cycles rendering engine gives a distorted point cloud (red):

[image: overlaid point clouds, a flat white square from Blender Internal and a bowed red one from Cycles]

When looking at the depth maps, one can clearly see the radial distortion (Blender Internal render vs. Cycles render, exported with a File Output node):

[image: side-by-side depth maps, Blender Internal vs. Cycles]

Here is a slice of the two depth maps (blue, green) and of the difference image (red) at row 240:

[plot: depth along row 240 for both engines, plus their difference]
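
Such a slice can be plotted with matplotlib; a minimal sketch, assuming the two maps have already been loaded as 2D NumPy arrays (depth_internal and depth_cycles are hypothetical names):

import matplotlib.pyplot as plt

# depth_internal, depth_cycles: 2D arrays holding the two depth maps
row = 240
plt.plot(depth_internal[row], label="Blender Internal")
plt.plot(depth_cycles[row], label="Cycles")
plt.plot(depth_cycles[row] - depth_internal[row], label="difference")
plt.xlabel("column")
plt.ylabel("depth")
plt.legend()
plt.show()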

(The two depth maps being different is not really the problem here, as long as I have the correct unprojection model.)

Research

I looked at the documentation, at this answer stating that the Blender camera has no distortion coefficients, and at this blog, which confirms the model I use for the Blender camera.
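
For reference, that distortion-free pinhole model reduces to the usual intrinsics; a minimal sketch (res_x and res_y are hypothetical names for the scaled render resolution, camdata and sensor_height as in the script above):

# focal lengths in pixels and principal point of the ideal pinhole camera
f_x = camdata.lens / camdata.sensor_width * res_x
f_y = camdata.lens / sensor_height * res_y   # equals f_x for square pixels
c_x = res_x / 2.0                            # principal point at the image center
c_y = res_y / 2.0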

I also studied the code for Cycles (src/render/camera.cpp, src/util/util_projection.h, src/util/util_transform.h) but could not find any trace of radial distortion.

Question

Does anybody know the internal camera model used by the Cycles rendering engine, or how to compute the correct camera intrinsic parameters? I guess the lens has distortion, but I could not find any parameters or even the code applying it.

I need to combine the point clouds unprojected from depth maps rendered from different views, but with this distortion they are not usable.

Thanks!


1 Answer


Thanks to the Blender dev forum, I figured it out.

The Cycles camera is a pinhole model that "uses the distance between a given point and the pinhole as its Z depth". So the depth must not be applied to points on the camera plane, but to points that lie on the unit sphere, i.e., to normalized ray directions.
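
In other words, if $(x, y, 1)$ is the ray through a pixel on the $z = 1$ camera plane, the stored value is the Euclidean distance $d$ from the pinhole to the 3D point $P$, so $P = \frac{d}{\sqrt{x^2 + y^2 + 1}}\,(x, y, 1)$.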

Here is the correct formula to unproject a camera pixel into a 3D point:

    import math

    # coordinates on the z = 1 camera plane
    x = (0.5 - float(u) / float(img.shape[1])) * camdata.sensor_width / camdata.lens
    y = (0.5 - float(v) / float(img.shape[0])) * sensor_height / camdata.lens
    z = 1.0

    norm = math.sqrt(x*x + y*y + z*z)

    # normalize = project the point onto the unit sphere, then apply the depth
    x = d * x / norm
    y = d * y / norm
    z = d * z / norm

The relevant code is in kernel_camera.h.
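
For whole images, the same unprojection can be vectorized; a minimal sketch with NumPy, assuming img, camdata and sensor_height from the question's script:

    import numpy as np

    # ray directions through each pixel on the z = 1 camera plane
    h, w = img.shape[0], img.shape[1]
    x = (0.5 - np.arange(w)[None, :] / float(w)) * camdata.sensor_width / camdata.lens
    y = (0.5 - np.arange(h)[:, None] / float(h)) * sensor_height / camdata.lens
    x, y = np.broadcast_arrays(x, y)
    z = np.ones_like(x)

    # scale the unit-sphere directions by the Cycles distance depth
    d = img[:, :, 0]
    norm = np.sqrt(x * x + y * y + z * z)
    points = np.dstack((d * x / norm, d * y / norm, d * z / norm))

    # keep only foreground pixels, as in the question's script
    cloud = points[d <= 100.0]   # (N, 3) array of 3D points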
