Problem:
As shown in the figure above, I have a coordinate pair (x, y) that describes a point in an image texture, and I want the corresponding coordinate pair (x_, y_) that describes the same point of the texture inside the rendered image.
Details:
The texture is applied to a plane primitive and the texture mapping mode is "Generated", not "UV". The plane as well as the camera can be rotated and translated arbitrarily, but never so far that the front side of the plane becomes invisible in the rendering. Furthermore, the plane is scaled in the x-direction to match the texture's aspect ratio. I am using Blender 2.79.
The task can be split into two subtasks:

1. Calculate the corresponding (x, y, z) coordinate triple of the point p on the plane's surface in 3D-space, onto which the point of interest gets mapped in object space.
2. Project this 3D-point p in object space into the 2D-space of the rendered image via world space and camera space.
Theory for subtask 1:
The plane's vertices are:
(-1.0000, -1.0000, 0.0000)
(1.0000, -1.0000, 0.0000)
(-1.0000, 1.0000, 0.0000)
(1.0000, 1.0000, 0.0000)
as confirmed by:
for v in sign.data.vertices: print(v.co)
while texture coordinates lie within [0, tex_width] for x and [0, tex_height] for y.
Therefore, p.x, p.y and p.z should be:
p.x = x / tex_width * 2 - 1
p.y = y / tex_height * 2 - 1
p.z = 0
(Dividing by the texture width and height rescales the coordinates to the range [0, 1]; multiplying by the plane's width and height (both equal to 2 Blender units) rescales them to [0, 2]; subtracting 1 shifts them to [-1, 1].)
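For concreteness, here is that mapping as a small helper function (the name tex_to_object_space is mine, purely for illustration):

def tex_to_object_space(x, y, tex_w, tex_h):
    # rescale [0, tex_w] x [0, tex_h] to [-1, 1] x [-1, 1];
    # z stays 0 because the plane lies in its local xy-plane
    return (x / tex_w * 2 - 1, y / tex_h * 2 - 1, 0.0)

print(tex_to_object_space(150, 90, 745, 543))
# -> approximately (-0.5973, -0.6685, 0.0)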
Theory for subtask 2:
The point p should be projected onto the image plane via the model-view-projection matrix (MVP-matrix), which is composed as follows:
mvp_matrix = projection_matrix * view_matrix * model_matrix
All four matrices are 4x4. For projection, p is given a fourth component w = 1 to make it homogeneous. Now it can be projected to p_ in camera space as follows:
p_ = mvp_matrix * p
The coordinates of p_ should be in the range [0, 1]. The desired coordinates (x_, y_) can then be calculated by rescaling p_.x and p_.y to the ranges [0, res_x] and [0, res_y], where res_x and res_y are the resolution of the rendered image:
x_ = p_.x * res_x
y_ = p_.y * res_y
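Written out in code, this recipe would look as follows (a literal transcription of my theory using Blender 2.79's * operator for matrix multiplication, not a verified solution; projection_matrix, view_matrix, model_matrix, p, res_x and res_y are as defined above):

from mathutils import Vector

p_hom = Vector((p.x, p.y, p.z, 1.0))  # homogeneous object-space point
p_ = projection_matrix * view_matrix * model_matrix * p_hom
x_, y_ = p_.x * res_x, p_.y * res_y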
Code:
import math
from mathutils import Vector
import bpy
# set up scene
scn = bpy.context.scene
scn.render.engine = "CYCLES"
for o in bpy.data.objects:
    if o.type == "MESH":  # delete the default cube
        bpy.data.objects.remove(o, do_unlink=True)
# load texture, calculate aspect ratio
im_tex = bpy.data.images.load("texture.png")
tex_w, tex_h = im_tex.size
ratio = tex_w/tex_h
# add plane and rescale, rotate and translate it
bpy.ops.mesh.primitive_plane_add()
plane = bpy.context.active_object
plane.scale = (ratio, 1., 1.)
plane.location = (2, 10, 1)
plane.rotation_euler = (math.radians(90),
                        math.radians(5),
                        math.radians(-15))
# set up material and apply texture
mat = bpy.data.materials.new("Material")
mat.use_nodes = True
mat.node_tree.nodes.clear()
input_node = mat.node_tree.nodes.new(type="ShaderNodeTexCoord")
tex_node = mat.node_tree.nodes.new(type="ShaderNodeTexImage")
tex_node.image = im_tex  # reuse the image loaded above instead of loading it a second time
bsdf_node = mat.node_tree.nodes.new(type='ShaderNodeBsdfPrincipled')
output_node = mat.node_tree.nodes.new(type='ShaderNodeOutputMaterial')
links = mat.node_tree.links
link_0 = links.new(input_node.outputs["Generated"], tex_node.inputs["Vector"])
link_1 = links.new(tex_node.outputs["Color"], bsdf_node.inputs["Base Color"])
link_2 = links.new(bsdf_node.outputs["BSDF"], output_node.inputs["Surface"])
plane.data.materials.append(mat)
# set up camera, rotate it and translate it
cam = bpy.context.scene.camera
cam.rotation_euler = (math.radians(90), 0, math.radians(5))
cam.location = (0, -1, 0)
# render and save blend-file for debugging
bpy.ops.wm.save_as_mainfile(filepath="debug.blend")
scn.render.filepath = "rendering.png"
bpy.ops.render.render(write_still=True, use_viewport=True)
# now for the interesting part:
(x, y) = (150, 90) # arbitrary point on texture
# Let's assume the texture resolution is (745, 543)
# subtask 1:
p = Vector((x * 2 / tex_w - 1.0,
            y * 2 / tex_h - 1.0,
            0,
            1))
res_x = scn.render.resolution_x * scn.render.resolution_percentage / 100
res_y = scn.render.resolution_y * scn.render.resolution_percentage / 100
asp_x = scn.render.pixel_aspect_x
asp_y = scn.render.pixel_aspect_y
# Projection matrix:
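# (Blender 2.79 API: Object.calc_matrix_camera(x, y, scale_x, scale_y)
#  returns the camera's projection matrix for the given render size)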
p_mtx = cam.calc_matrix_camera(res_x, res_y, asp_x, asp_y)
# This corresponds to the model-view-matrix:
mv_mtx = cam.matrix_world.inverted() * plane.matrix_world.copy()
mvp_mtx = p_mtx * mv_mtx
# subtask 2:
p_ = mvp_mtx * p
p_x, p_y, _, _ = p_
x_ = res_x * p_x
y_ = res_y * p_y
print(x_, y_)
The result for this example is 4471.2872314453125 851.4681959152222, which isn't even inside the rendering, which has a resolution of (960, 540). The desired output would be something like (692.0, 104.5).
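To make the failure easier to inspect, the intermediate vector can be printed before rescaling (purely a debugging aid; the fourth component w is printed on purpose):

p_dbg = mvp_mtx * p
print("projected (x, y, z, w):", tuple(p_dbg))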
Questions:
- Is there any error in my subtask theories?
- What is wrong with my implementation?
Appendix:
texture.png: resolution (745, 543), mark at (150, 90)
rendering.png: the rendered output, resolution (960, 540)