
I am using Unreal Engine to export custom render passes for the scene depth in world units. Every pixel in the render pass stores the distance from the camera to that point in the scene, in standard UE4 units (centimetres). For example, a pixel with a value of 500 (more precisely (500, 500, 500), since the value is repeated across all three channels) would be 500 cm from the camera.
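A minimal sketch of loading one of these passes in Python (the file name here is hypothetical, and reading EXR with imageio assumes the relevant plugin is installed; since all three channels carry the same value, keeping a single channel is enough):

```python
import imageio.v3 as iio

# All channels of the pass are identical, so keep only the first.
depth = iio.imread("scene_depth_world_units.0001.exr")[..., 0]  # (H, W), in cm
```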

I am then taking these render passes and trying to project them into point clouds inside Houdini. I have this all working, but I'm having trouble figuring out how to correct for the distortion that arises in the point cloud.

In my attempt to understand this, I created a box with the same aspect ratio as my output (1920 × 1080) and positioned the camera so that the box's front face fills the frame more or less exactly.

Below is the point cloud, with a wireframe showing where the box was in Unreal Engine. After distortion correction, all those points should align with the face of the box nearest the camera, on the left. The centre pixels are in the correct place, but as you can see, the further a point is from the centre, the more offset it becomes from its true spatial location.

[Image: the projected point cloud, with a wireframe overlay marking the box's original position]

I understand that this distortion arises because the render pass is essentially measuring along a line from each pixel to the camera, so the further a pixel is from the centre, the longer that line, giving the false impression of increased depth across what should be a flat plane.

I have been doing a lot of experimenting with varying distances, sizes of boxes, focal lengths, and so on, trying to figure out what the formula is to correct this distortion, but I'm completely stuck.

I feel like the formula must involve a combination of the camera-to-pixel distance and the angle between the camera-to-pixel line and the camera's forward axis. But I might be totally off base, because I'm not a mathematician by any means.
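For concreteness, here is what I think that intuition amounts to under a simple pinhole-camera model (these symbols are my own sketch, not anything UE4 exposes directly): with $d$ the camera-to-pixel distance from the depth pass, $(x, y)$ the pixel's offset from the image centre in pixels, $W$ the image width, and $\mathrm{FOV}_h$ the horizontal field of view,

$$f = \frac{W/2}{\tan(\mathrm{FOV}_h / 2)}, \qquad \cos\theta = \frac{f}{\sqrt{x^2 + y^2 + f^2}}, \qquad z = d \cos\theta,$$

where $z$ would be the corrected depth along the camera's forward axis.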

Can anyone help me out or point me in the right direction?

  • Sorry, how are you getting these values? Are you simply reading the depth-buffer value (which is linear in 1/Z) for each pixel? If so, then for a plane, all the points should still lie in a plane if you map them (correctly) back into 3D. (Yes, the distance from the camera to each point will vary, and you would get a curve, but you shouldn't be using that.) – Simon F, Jul 20, 2017 at 8:36
  • I'm exporting custom render passes out of the Sequencer. The specific pass is 'Scene Depth World Units'. The standard depth pass ('Scene Depth') doesn't give the actual world Z position in correct units. – Commented Jul 20, 2017 at 9:17

1 Answer


I've figured out a different approach to projecting the point cloud inside Houdini.

I had assumed that the 'Scene Depth World Units' pass stored the depth of each pixel along a vector parallel to the camera's forward vector, when in fact each pixel's depth is measured along a ray angled to converge on the camera's position.

So rather than projecting the point cloud as if the depth were planar and then trying to correct the distortion this creates, it makes more sense to project the points the same way they were created.

Knowing the position and rotation of the camera, I can use the depth pass to project the points outwards along each pixel's ray from that location.
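A minimal sketch of that projection in Python/NumPy (assuming a simple pinhole camera with +X right, +Y up, +Z forward; the function name, arguments, and axis conventions here are illustrative, not UE4's or Houdini's actual conventions):

```python
import numpy as np

def depth_to_points(depth, fov_h_deg, cam_pos, cam_to_world):
    """Project a radial depth image into world-space points.

    depth        : (H, W) array of camera-to-pixel distances (cm)
    fov_h_deg    : horizontal field of view in degrees
    cam_pos      : (3,) camera position in world space
    cam_to_world : (3, 3) camera-to-world rotation matrix
    """
    h, w = depth.shape
    # Focal length in pixel units, derived from the horizontal FOV.
    f = (w / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)
    # Pixel offsets measured from the image centre.
    xs = np.arange(w) - (w - 1) / 2.0
    ys = np.arange(h) - (h - 1) / 2.0
    px, py = np.meshgrid(xs, ys)
    # Unit ray through each pixel (image Y points down, so negate it).
    rays = np.dstack([px, -py, np.full_like(px, f)])
    rays /= np.linalg.norm(rays, axis=2, keepdims=True)
    # The pass stores distance *along the ray*, so just scale each unit ray
    # by its depth -- no cosine correction needed.
    pts_cam = (rays * depth[..., None]).reshape(-1, 3)
    return pts_cam @ cam_to_world.T + cam_pos
```

Because the pass stores the distance along each pixel's ray, scaling the unit ray by the raw depth lands every point in the right place, and no separate distortion-correction step is needed.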

