I am attempting to simulate structured light 3D scanning in Blender (v2.78c), but I have run into an unexpected alignment issue that I have not been able to resolve.
I have isolated the issue in a simple .blend file that contains:
- One simple plane surface to scan
- One camera:
  - 2001x1080 resolution (also retested at 2048x1080)
  - 49.134° FOV
- One spotlight:
  - 49.134° FOV
  - same position and orientation as the camera
  - distance of 10.44, which lies on the plane
  - constant falloff
  - blend of 0
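The reasoning behind matching the spotlight's FOV and aspect to the camera is that, with both co-located, a projector column should map back to exactly the same camera column. A minimal sketch of that round trip (the function names and the pixel-centre convention are my own assumptions, not taken from the .blend file):

```python
import math

# Values mirroring the setup above.
FOV_DEG = 49.134   # shared horizontal FOV of camera and spotlight
WIDTH = 2001       # horizontal resolution

def column_to_angle(x, width=WIDTH, fov_deg=FOV_DEG):
    """Angle (radians) of the ray through the centre of pixel column x."""
    half = math.tan(math.radians(fov_deg) / 2)
    # Normalised device coordinate of the pixel centre, in [-1, 1).
    ndc = (2 * (x + 0.5) / width) - 1
    return math.atan(ndc * half)

def angle_to_column(theta, width=WIDTH, fov_deg=FOV_DEG):
    """Inverse mapping: ray angle back to a (sub-)pixel column."""
    half = math.tan(math.radians(fov_deg) / 2)
    ndc = math.tan(theta) / half
    return (ndc + 1) * width / 2 - 0.5

x = angle_to_column(column_to_angle(1500))  # round-trips to ~1500
```

With identical FOV, resolution and pose, this mapping is the identity, which is why I expected the captured line to sit symmetrically on the projected column. Note the half-pixel terms: if the renderer treats projector texels and camera pixels with different centre conventions, a systematic offset like the one below could appear.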
My assumption was that the camera's rendered output would capture single lines corresponding exactly to the projected images; however, the center of each captured line is consistently offset.
The zipped blend file also contains the three images to be projected, together with their rendered captures. Each projected image is black except for a single-pixel-wide vertical line, at x=500, x=2000 and x=1500 respectively. Here are the captured lines, zoomed in:
The gray column to the left of the captured line's center is consistently darker than the gray column to its right, so once sub-pixel accuracy is computed, the line is always located slightly to the right of its actual position. Given that the camera and spotlight share the same position, orientation and FOV, the expected output is gray columns of equal intensity on either side of the captured column.
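To make the effect of that asymmetry concrete, here is a sketch of the kind of sub-pixel estimator I mean: an intensity-weighted centroid over a three-pixel window around the peak column (the function and values are illustrative, not from my actual pipeline):

```python
def subpixel_center(intensities, peak):
    """Intensity-weighted centroid of a line over a 3-pixel window.

    `intensities` is a row of pixel values; `peak` is the index of the
    brightest column. Returns the estimated sub-pixel x position.
    """
    left = intensities[peak - 1]
    mid = intensities[peak]
    right = intensities[peak + 1]
    total = left + mid + right
    return ((peak - 1) * left + peak * mid + (peak + 1) * right) / total

# Symmetric neighbours: the centre lands exactly on the peak column.
print(subpixel_center([0, 40, 255, 40, 0], 2))   # 2.0

# Darker left neighbour, as in my renders: the estimate is biased right.
print(subpixel_center([0, 20, 255, 60, 0], 2))   # > 2.0
```

This is why the unequal gray columns matter: any centroid-style estimator will systematically place the line to the right of the true column.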
If anyone could shed some light (pun intended) on the possible cause and/or resolution of this issue, it would be most appreciated.
EDIT: Here are the captures at a camera resolution of 2048x1080, from an updated blend file; unfortunately the issue persists.