I am writing a toy operator that selects the vertices of a mesh visible from the active camera.
My approach is to perspective-project the mesh's vertices into NDC (Normalized Device Coordinates, a term from rasterization graphics APIs such as OpenGL) and check whether each projected point lies in the range [-1, 1]^3.
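The visibility test described above can be sketched in plain Python, independent of Blender. This is a minimal sketch, not the code from the gist: it builds a standard OpenGL-style perspective matrix, projects a camera-space point to NDC with the perspective divide, and checks the [-1, 1]^3 containment.

```python
import math

def perspective_matrix(fovy, aspect, near, far):
    """Standard symmetric perspective projection matrix (OpenGL convention,
    camera looking down -Z), stored as a row-major nested list."""
    f = 1.0 / math.tan(fovy / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project_to_ndc(m, point):
    """Multiply a camera-space point by the projection matrix and perform
    the perspective divide, yielding NDC coordinates."""
    x, y, z = point
    clip = [sum(m[r][c] * p for c, p in enumerate((x, y, z, 1.0)))
            for r in range(4)]
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)

def is_visible(ndc):
    """A point is inside the view volume iff every NDC component is in [-1, 1]."""
    return all(-1.0 <= c <= 1.0 for c in ndc)
```

For example, a point 5 units in front of the camera passes the test, while a point behind the camera fails it. In the real operator the vertices would first be transformed from world space into camera space (in Blender, via the inverse of the camera object's `matrix_world`).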
I ran into a problem when constructing the projection matrix using the camera.view_frame() function: view_frame() seems to return double the real size of the camera's sensor.
I am confused by this behavior.
Supplementary background: I wrote a blog post with details about this toy operator at http://linearconstraints.net/?p=53
and the full source code is at https://gist.github.com/thebusytypist/8900746.
Note that I divide the left/right/bottom/top by 2 in lines 75-76.
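To make the role of left/right/bottom/top concrete, here is a sketch of the standard glFrustum-style matrix they feed into (this mirrors the textbook OpenGL formula, not necessarily the exact code in the gist). The extents are measured on the near plane, so if view_frame() hands back the full frame width and height, halving them to get left/right/bottom/top is exactly the division by 2 mentioned above.

```python
def frustum_matrix(left, right, bottom, top, near, far):
    """OpenGL glFrustum-style projection matrix from asymmetric frustum
    extents measured on the near plane (row-major nested list)."""
    return [
        [2.0 * near / (right - left), 0.0,
         (right + left) / (right - left), 0.0],
        [0.0, 2.0 * near / (top - bottom),
         (top + bottom) / (top - bottom), 0.0],
        [0.0, 0.0, -(far + near) / (far - near),
         -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```

Note how sensitive the matrix is to a factor of 2 in the extents: doubling left/right/bottom/top halves the diagonal scale terms, which would shrink every projected point and wrongly classify off-screen vertices as visible.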
Compare Camera.clip_start with the z-values of Camera.view_frame(); it seems the z-value is being doubled as well. There are also Camera.clip_end and Camera.angle, and you might get the aspect ratio from Scene.render.resolution_x/y. An explanation of how exactly Blender does the calculation would indeed be helpful.
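One way to cross-check what view_frame() returns is to compute the expected frame size on the near plane directly from the field of view, using the standard relation half_width = near * tan(fov / 2). The sketch below assumes the field of view is horizontal (which matches Blender's Camera.angle only for the default horizontal sensor fit, so treat that as an assumption):

```python
import math

def frustum_extents(fov, aspect, near):
    """Half-extents of the view frustum on the near plane, assuming `fov`
    is the horizontal field of view and `aspect` is width / height."""
    half_w = near * math.tan(fov / 2.0)  # basic trigonometry on the frustum
    half_h = half_w / aspect
    return half_w, half_h
```

If the values returned by view_frame() come out at twice these half-extents, that would be consistent with it reporting full frame dimensions rather than half-extents, which is what the division by 2 in the gist compensates for.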