I am using Unreal Engine to export custom render passes containing scene depth in world units. Each pixel stores the distance from the camera to the corresponding point in standard UE4 units, so a pixel with a value of 500 (more precisely, (500, 500, 500)) represents a point 500cm from the camera.
I am then taking these render passes and trying to project them into point clouds inside Houdini. I have this all working, but I'm having trouble figuring out how to correct for the distortion that arises in the point cloud.
In my attempt to understand this, I created a box whose front face has the same aspect ratio as my output resolution (1920 x 1080) and positioned the camera so that the face fills the frame more or less exactly.
Below is the point cloud, with a wireframe showing where the box was in Unreal Engine. After distortion correction, all those points should align with the face of the box nearest the camera, on the left. The centre pixels land in the correct place, but as you can see, the further a point is from the centre, the more offset it becomes from its true spatial location.
I understand that this distortion arises because the render pass is effectively measuring the straight-line distance along a ray from the camera to each pixel. The further a pixel is from the centre, the longer that ray, which gives the false impression of increased depth across what should be a flat plane.
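To make that concrete with some hypothetical numbers (a wall 500cm away is my assumption here, not a measurement from my actual scene): a pixel whose view ray makes a 30-degree angle with the camera's forward axis records the ray length rather than the flat-plane depth.

```python
import math

# Hypothetical example: a flat wall 500 cm in front of the camera.
# A pixel whose view ray is angled 30 degrees off the forward axis
# stores the ray length, which is longer than the true planar depth:
ray_length = 500.0 / math.cos(math.radians(30.0))
print(round(ray_length, 1))  # -> 577.4
```

So for this pixel the render pass would read roughly 577cm even though the wall is only 500cm away, which matches the bowing I'm seeing toward the edges of the frame.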
I have been doing a lot of experimenting with varying distances, sizes of boxes, focal lengths, and so on, trying to figure out what the formula is to correct this distortion, but I'm completely stuck.
I feel like the formula must involve a combination of the camera-to-pixel distance and the angle between the camera-to-pixel ray and the line pointing straight out of the camera (the camera's forward axis). But I might be totally off base, because I'm not a mathematician by any means.
Can anyone help me out or point me in the right direction?