
Here's an illustration of what needs to be done:

[image: projection from a point onto the mirror (1), bounced onto the surface (2)]

A texture has to be projected from a point in space onto a mirror (1 in the image), which bounces the projection onto a surface (2).

Here is the texture that is going to be projected, although in practice a real video file will be used.

[image: the texture to be projected]

The tricky part is that the mirror (the surface marked 1 in the illustration) is not flat but curved, as illustrated below:

[image: the curved mirror]

The point is to see how the projected image or video will appear on the final surface (2), given a projection angle (FOV) and a curved bounce mirror. This is for a video for a project in which I need to illustrate how a curved mirror deforms a projection from a video projector, laser projector and the like. In the animation, the shape of the mirror gradually changes (via shape keys), which in turn changes the size and deformation of the projected image/video.

If Blender can do this somewhat accurately, I can use it for simulations as well, which is a plus because of its superior render quality compared to specialized CAD programs.

Also, if this is possible with Blender and we get an answer, it makes Blender a powerful tool for people working with video projectors.

  • See my answer here for the projection part blender.stackexchange.com/questions/57013/… Not sure how you can achieve the reflections though; Cycles is not very good at caustics. Also curious to see if anybody can come up with a solution. Commented Sep 20, 2017 at 18:31
  • The reflections are not going to work unless you use a bidirectional path tracing engine like Luxrender. See: reflecting direct light on glossy surface in Cycles and How to create and animate the water reflection (Caustics) on an object? – user1853 Commented Sep 20, 2017 at 21:53
  • I don't mind using a different renderer. Feel free to post an answer using one. – Leo Ervin Commented Sep 20, 2017 at 21:57
  • Read the first link. There are a few alternatives mentioned in it. – user1853 Commented Sep 20, 2017 at 22:02
  • Might be able to hack a solution using OSL on the surface (2) to trace paths towards the mirror and reflect them to the projector, summing over all valid paths. I don't know how efficient (or not) it will be, as it would need to trace many hundreds of rays off each surface point to resolve the whole projection, but it should be quicker than relying on caustics. Commented Sep 23, 2017 at 8:26

4 Answers

Answer (14 votes, +200 bounty)

As has been said, Cycles isn't capable of this. LuxRender, however, is. You can read about it on this forum and elsewhere; here I'll just note that it integrates into Blender (previewing is even possible with the experimental API enabled, as demonstrated below), and that it handles refractive and reflective caustics very well. It is a bit slower, but the versatility is worth it for much more accurate physics than Cycles.

I have reproduced your scene by creating a projector Spot lamp (using your image) and a Mirror material on the reflector. The reflector mesh has a Subsurf modifier and Smooth Shading enabled. The "screen" has the default Matte material:

[image: render of the LuxRender scene]

Here I am morphing the plane from flat, to convex, to concave:

[animation: the mirror morphing from flat, to convex, to concave]

You must install LuxRender before opening the .blend.

  • For anyone curious, the error with the above .blend is caused by the texture file path not having been set to relative. Fix it and it works (provided you downloaded the texture image file). – Leo Ervin Commented Sep 25, 2017 at 12:34
  • I assume it must be unpacked because of Luxrender? – Leo Ervin Commented Sep 25, 2017 at 19:02
  • All the answers are pretty clever, but yours is the most straightforward in my opinion, so I'm going to give you the bounty. Before I do: if there is a way, could you add to your answer how to add vertical lens shift to the virtual 'projector'? It might also be possible by adding a black border on the top and bottom of the image and switching to UV mode, but there's probably a simpler way. Lens shift in video projectors allows the projection to shoot not straight at the middle but, say, from the ceiling to a blackboard, while still keeping both the top and bottom of the projection in focus. – Leo Ervin Commented Sep 26, 2017 at 18:23
  • I think we misunderstood each other. I'm talking about emulating the "lens shift" found in real-world video projectors. Lens shift is when the projection doesn't come out of the final lens straight at the middle but at an angle. That's useful when the projector is not installed level with the middle of the projection screen (vertically) but above or below it. In fact, most home video projectors shoot the projection at a vertical angle for this reason; some more expensive ones let you adjust the lens responsible for the shift. Can we emulate this? i.imgur.com/nlRI4kZ.png – Leo Ervin Commented Sep 26, 2017 at 20:49
  • Some video projectors don't allow adjusting the lens shift and instead have a feature called "keystone correction", which just crops and deforms the projection so it looks proportionally correct. This cropping obviously causes loss of resolution and is a digital edit to the video rather than an optical effect like lens shift. – Leo Ervin Commented Sep 26, 2017 at 20:52
Answer (12 votes)

Particle Simulation

You could use a particle simulation to simulate the light rays. In this setup, the particles are emitted from the faces of the hemisphere with an initial velocity along the face normals. The particles duplicate a simple icosphere. On the left wall, set up a Dynamic Paint canvas; the icospheres serve as brushes. Bake the simulation, then bake the Dynamic Paint sequence.

[image: particle simulation render]
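The "emit along the face normals" step is plain vector algebra; here is a minimal plain-Python sketch (no bpy, and the triangle and speed are made-up example values) of how a particle's start position and velocity follow from a triangular face:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def face_normal(v0, v1, v2):
    # the normal of a triangle is the normalized cross product of two edges
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    return normalize(cross(e1, e2))

def emit_particle(v0, v1, v2, speed=1.0):
    # particle starts at the face centre, velocity along the face normal
    centre = tuple(sum(c) / 3.0 for c in zip(v0, v1, v2))
    n = face_normal(v0, v1, v2)
    velocity = tuple(speed * c for c in n)
    return centre, velocity

# a triangle lying in the XY plane emits straight up (+Z)
pos, vel = emit_particle((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

This is exactly what "initial velocity along the face normals" gives you in the particle settings; the simulation then advances each particle along that velocity until it collides.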

The Camera

The camera's focal length serves as a reference for the "ray casting". A Boolean cuts out the other parts of the emitter. The spheres collide with the curved wall and create a different hit pattern.
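For matching a real projector's throw angle, the relation between focal length and horizontal field of view is fov = 2·atan(sensor_width / (2·focal_length)); 32 mm is Blender's default sensor width, and the 35 mm below is just an example value. A quick sketch:

```python
import math

def fov_from_focal_length(focal_mm, sensor_mm=32.0):
    # horizontal FOV in degrees; 32 mm is Blender's default sensor width
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * focal_mm)))

fov = fov_from_focal_length(35.0)  # example: a 35 mm lens gives roughly a 49 degree FOV
```

A 16 mm lens on a 32 mm sensor gives exactly 90 degrees, which is a handy sanity check.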

Rendering

Use a long shutter time (I used 48) to make the rays visible. Change the motion blur to a more pleasing curve (I used a very soft falloff). Sorry for the low sample count; it just renders too slowly for fast testing.

  • Very interesting way to do this, but it doesn't seem like it would work with a video texture. Commented Sep 25, 2017 at 17:24
  • Ignoring the original purpose of this, it just looks really cool. Love the glowing particles. – SilverWolf Commented Sep 25, 2017 at 18:10
  • A script could be written to render this for each frame of a video file, I guess. Still pretty clever to use particles for simulating light beam paths. – Leo Ervin Commented Sep 25, 2017 at 18:59
  • I could watch the slow-mo all day. Thinking outside the box, very nice. – bertmoog Commented Sep 25, 2017 at 20:44
  • Ingenious use of particles - very nice solution! Commented Sep 28, 2017 at 10:34
Answer (10 votes)

Scripting a reverse UV projection.

From the camera bounds, shoot (raycast) an n × m resolution (a simple grid UV) of rays onto the mirror object.

Estimate the normal from the hit face; the reflection vector is then ray.reflect(normal). Project that vector onto the plane.
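Both operations come from mathutils (Vector.reflect and mathutils.geometry.intersect_line_plane) and boil down to a few lines of vector algebra; a minimal plain-Python sketch of what they compute (the numbers at the bottom are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # mirror direction d about the unit normal n:  r = d - 2(d.n)n
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def intersect_line_plane(origin, direction, plane_co, plane_no):
    # point where the ray origin + t*direction meets the plane, or None if parallel
    denom = dot(direction, plane_no)
    if abs(denom) < 1e-9:
        return None
    t = dot(tuple(p - o for o, p in zip(origin, plane_co)), plane_no) / denom
    return tuple(o + t * d for o, d in zip(origin, direction))

# a ray coming in at 45 degrees onto a horizontal mirror bounces up at 45 degrees
r = reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0))
# the bounced ray, cast from the hit point, lands on the plane z = 5 at (5, 0, 5)
hit = intersect_line_plane((0.0, 0.0, 0.0), r, (0.0, 0.0, 5.0), (0.0, 0.0, 1.0))
```

Note that reflect() assumes the normal is unit length, while the plane intersection works for any non-parallel direction.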

[image: debug empties on the sphere mirror]

Debug result of shooting a 16 × 9 UV onto a sphere as the mirror. The black arrows (empties) represent the reflection vectors. The faces used to calculate the normals are selected. Notice that the pole of the UV sphere is projected onto.

[image: the resultant mesh and UV]

The resultant mesh and UV. Faces are missing where a reflected ray misses the plane. The pole affects the mesh on the RHS.

[image: reflection onto the middle of the sphere]

Result of reflecting onto the middle of the sphere: all reflections are captured, at a UV resolution of 160 × 90.

Instructions

Grab the full script from below and edit to suit. By default it uses a mirror object named Grid and reflects a (160, 90) UV map onto a plane with global location (0, -10, 0), facing (0, 1, 0):

debug = True
# projecting from scene.camera, change for different cam
mirror_obj_name = "Grid"
# reflection plane, global coordinates
plane_co = Vector((0, -10, 0))
plane_no = Vector((0, 1, 0))
# the UV resolution to project from the camera
XRES, YRES = (160, 90)
if debug:
    # adds an empty at each hit point, pointing along the reflection vector
    XRES, YRES = (16, 9)

Make sure you are not in Edit Mode and run the script. A new mesh object named "reflection" containing the result will be created. With debug = True, a set of empties representing the hit locations and reflection angles will be placed on the mirror.

Some notes:

Make the mirror object as high resolution as possible and make sure it has vertex normals. The normal at the raycast hit is interpolated with a barycentric transform: the hit point is mapped from a triangle in the hit tessellation face (the face centre and the two vertices of the nearest edge) onto the corresponding triangle of normals (the face normal and the two vertex normals).
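That interpolation can be sketched in plain Python: compute the barycentric weights of the hit point in a triangle, then blend the three normals with the same weights (this is what the script's barycentric_transform call achieves; the triangle and normals below are illustrative, not from any particular mesh):

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def barycentric_weights(p, a, b, c):
    # standard barycentric coordinates of p in triangle (a, b, c)
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1.0 - v - w, v, w)

def interpolated_normal(p, tri, normals):
    # blend the three normals with the hit point's barycentric weights
    u, v, w = barycentric_weights(p, *tri)
    n = tuple(u * na + v * nb + w * nc for na, nb, nc in zip(*normals))
    l = math.sqrt(dot(n, n))
    return tuple(c / l for c in n)

tri = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
normals = ((0, 0, 1), (1, 0, 0), (0, 1, 0))  # illustrative vertex normals
# exactly at a vertex, the interpolated normal is that vertex's normal
n = interpolated_normal((1, 0, 0), tri, normals)
```

At the face centre the weights are equal thirds, so the result smoothly blends the three normals across the face, which is why a high-resolution mirror with good vertex normals gives cleaner reflections.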

Code:

import bpy
import bmesh
from mathutils import Vector, Matrix
from mathutils.geometry import intersect_line_plane, barycentric_transform
from math import degrees, pi, radians

debug = True
# projecting from scene.camera, change for different cam
mirror_obj_name = "Grid"
# reflection plane, global coordinates
plane_co = Vector((0, -10, 0))
plane_no = Vector((0, 1, 0))
# the UV resolution to project from the camera
XRES, YRES = (160, 90)
if debug:
    # adds an empty at each hit point, pointing along the reflection vector
    XRES, YRES = (16, 9)

def draw_empty_arrow(scene, name, loc, r_global, draw_size=1):
    R = r_global.to_track_quat('Z', 'X').to_matrix().to_4x4()
    mt = bpy.data.objects.new(name, None)
    R.translation = loc
    #mt.show_name = True 
    mt.matrix_world = R
    mt.empty_draw_type = 'SINGLE_ARROW'
    mt.empty_draw_size = draw_size
    scene.objects.link(mt)

def calc_normal(mesh, face_index, point):
    face = mesh.polygons[face_index]
    # debug TODO take out
    if debug:
        face.select = True
    o = face.center
    p = point - o
    # find the edge

    for ek in face.edge_keys:
        v0, v1 = (mesh.vertices[i] for i in ek)
        if (v0.co - o).angle(p) <= (v0.co - o).angle(v1.co - o):
            break

    return barycentric_transform(point, o, v0.co, v1.co,
                                face.normal, v0.normal, v1.normal)



def obj_raycast(obj, ray_origin, ray_target, matrix=None):
    # returns global coords
    if matrix is None:
        matrix = obj.matrix_world

    # get the ray relative to the object
    matrix_inv = matrix.inverted()
    ray_origin_obj = matrix_inv * ray_origin
    ray_target_obj = matrix_inv * ray_target
    ray_direction_obj = ray_target_obj - ray_origin_obj

    # cast the ray
    success, location, normal, face_index = obj.ray_cast(ray_origin_obj, ray_direction_obj)

    if success:
        n = calc_normal(obj.data, face_index, location)
        return matrix * location, (matrix * (location + n) - matrix * location).normalized(), face_index
    else:
        return None, None, None


context = bpy.context
scene = context.scene
render = scene.render
camera = scene.camera
# rays in global space
ray_origin = camera.location
cmw = camera.matrix_world
ar = render.resolution_y / render.resolution_x
s = Matrix.Scale(ar, 4, (0.0,1.0,0.0))
# get the camera frame corners in global space
tr, br, bl, tl = (cmw * s * p for p in camera.data.view_frame())  
# x direction stripe
origin = bl
x = br - bl
dx =  x.normalized()
# y direction stripes
y = tl - bl
dy = y.normalized()

obj = scene.objects.get(mirror_obj_name) # context.obj?
matrix = obj.matrix_world
mwi = matrix.inverted()

# reflection plane
plane_co = Vector((0, -10, 0))
plane_no = Vector((0, 1, 0))
strips = []
bm = bmesh.new()
uvs = {} # uv vert lookup
#make a mesh
me = bpy.data.meshes.new("reflection")

uv_layer = bm.loops.layers.uv.verify()
bm.faces.layers.tex.verify()

for s in range(YRES  + 1):
    py = bl + s * y.length / float(YRES) * dy
    strip = []
    for b in range(XRES  + 1):
        p = py + b * x.length / float(XRES) * dx
        ray_target = p
        hit, n, f = obj_raycast(obj, ray_origin, ray_target)

        if hit is not None:
            # check face normal vs returned normal
            face_normal = obj.data.polygons[f].normal

            v = (hit - ray_origin).normalized() # incoming vector in global space
            reflect =  v.reflect(n) # reflect vector
            v = ray_target - ray_origin
            r_global = reflect
            if debug:
                draw_empty_arrow(scene, "UV_%d_%d" % (b, s), hit, r_global)
            g = intersect_line_plane(hit,
                     hit + reflect, 
                     plane_co, plane_no, False)
            if reflect.angle(-plane_no) < radians(90):
                vert = bm.verts.new(g)
                strip.append(vert)
                uvs[vert] = (b / XRES, s / YRES)
            else:
                strip.append(None) # miss reflect
        else:
            strip.append(None) # miss on hit
    strips.append(strip)

# skin it
bm.verts.ensure_lookup_table()
for j in range(len(strips) - 1):
    s0 = strips[j]
    s1 = strips[j+1]
    for i in range(len(s0) - 1):
        verts = [s1[i+1], s0[i+1], s0[i], s1[i]]
        if None in verts:
            continue
        f = bm.faces.new(verts)
        # add uv
        for l in f.loops:
            luv = l[uv_layer]
            luv.uv = uvs[l.vert]

bm.to_mesh(me)

obj = bpy.data.objects.new("reflection", me)
#obj.matrix_world = matrix
scene.objects.link(obj)
if debug:
    bpy.ops.object.select_by_type(type='EMPTY')  
  • Pretty cool, but the result doesn't seem right (some faces on the edges seem to be missing). – Leo Ervin Commented Sep 25, 2017 at 19:00
  • If the reflection vector is 90 degrees or more from the plane normal, it won't be reflected back onto the plane; likewise if the mirror doesn't fill the view... rays will miss. Would need to reflect onto a cylinder, or better still a sphere, to catch all faces. – batFINGER Commented Sep 26, 2017 at 3:23
  • @LeoErvin Added an image of reflection off a sphere where all reflected rays are caught (no missing faces). – batFINGER Commented Sep 26, 2017 at 4:22
  • OK, perfect. Any way to simulate the vertical lens shift found in video projector lenses? – Leo Ervin Commented Sep 26, 2017 at 18:30
  • @LeoErvin The rays are projected from the camera. Adjust the vertical Shift on the Lens panel of the camera data. – batFINGER Commented Sep 27, 2017 at 4:25
Answer (9 votes)

Cycles is capable of this by using an OSL shader. The trick is to use the 'trace' function to fire multiple rays from the surface receiving the projection towards the reflective surface, then calculate the reflected ray from the surface normal at each hit point. Most of the rays will not find their way back to the projection source, but some will. The shader sums the results of all the rays that reflect back to the projector source (or, at least, close enough).

[animation: the rendered result]

Obviously, the more rays that are traced from the 'screen' out to the 'reflector', the better the chance of hitting one which makes it back to the projection source - but also the more work involved in the render. Two factors control this: the 'resolution' and the 'threshold'. The 'resolution' defines the grid of points sent out from each point on the screen to the reflector - i.e., resolution × resolution rays (doubling the resolution results in 4 times as many rays). The 'threshold' is a measure of how close to the projection source the reflected ray must be before it is counted as a 'hit'; smaller threshold values reduce the chance of any one ray 'hitting' the projector but give a higher-quality result.
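To get a feel for what a given 'threshold' means: for unit vectors, the separation distance the shader measures is |a - b| = 2·sin(θ/2), where θ is the angle between them, so a threshold converts directly to an angular tolerance. (A rough guide only - the shader actually compares against Threshold*3.)

```python
import math

def threshold_to_angle_deg(threshold):
    # for unit vectors a, b at angle theta: |a - b| = 2*sin(theta/2)
    return math.degrees(2.0 * math.asin(threshold / 2.0))

loose = threshold_to_angle_deg(0.999)   # the shader's default: roughly 60 degrees
tight = threshold_to_angle_deg(0.001)   # a sharp setting: well under a tenth of a degree
```

This makes it clear why a tight threshold needs a much higher resolution: far fewer of the traced rays fall inside the acceptance cone.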

Here's my setup:

[image: scene setup]

I've used displacement to create a rippled reflector:

[image: the rippled reflector]

The OSL code: use the Text Editor, create a new block, and paste this:

shader reflect_texture(
    vector Point = P,
    vector ReflectorTarget = P,
    vector ProjectorOrigin = P,
    string FileName = "",
    int Resolution = 10,
    float Threshold = 0.999,
    float ImageScale = 1.0,
    output color Color = color(0.5,0.5,0.5)
    ) {

    color Accumulation = color(0,0,0);
    int AccumulationCount = 0;
    float Distance = 0.0;
    vector Normal = vector(0,0,0);
    float Hit = 0.0;

    vector Target_to_Projector = ProjectorOrigin - ReflectorTarget;
    vector imageHorizVector = normalize(cross(Target_to_Projector, vector(0,0,1)));
    vector imageVertVector = normalize(cross(imageHorizVector, Target_to_Projector));

    float Step = 2.0 / Resolution;
    vector randomvect = noise("cell", Point*5000);
    float randoffsetx = randomvect[0]*Step;
    float randoffsetz = randomvect[2]*Step;


    for (float x = -1.0; x <= 1.0 ; x+= Step)
    {
        for (float z = -1.0; z <= 1.0 ; z+= Step)
        {
            vector ReflectorPoint = vector(ReflectorTarget[0]+x+randoffsetx, ReflectorTarget[1], ReflectorTarget[2]+z+randoffsetz);
            vector point_to_reflector = ReflectorPoint - Point;

            // trace the ray to find the normal at the point it hits
            if(trace(Point,normalize(point_to_reflector)))
            {
                getmessage("trace", "hitdist", Distance);
                getmessage("trace", "N", Normal);
                //getmessage("trace", "hit", Hit);
            }
            else
            {
                continue;
            }

            // reflect it
            vector reflected_ray = reflect(point_to_reflector, Normal);

            vector reflector_to_projectororigin = ProjectorOrigin - ReflectorPoint;

            vector separationVect = (normalize(reflector_to_projectororigin) - normalize(reflected_ray));
            float separationDist = sqrt(dot(separationVect, separationVect));

            if (separationDist < (Threshold*3)) 
            {
                //...it's a hit...
                float x_offset = (dot(imageHorizVector, normalize(reflected_ray)) / ImageScale ) + 0.5;
                float y_offset = (dot(imageVertVector, normalize(reflected_ray)) / ImageScale ) + 0.5;
                if ((x_offset >=0.0 ) && (x_offset <= 1.0) && (y_offset >= 0.0) && (y_offset <= 1.0))
                {
                    Accumulation += texture(FileName, x_offset, y_offset)/pow(2,separationDist/Threshold);
                    AccumulationCount++;
                }
            }
        }
    }
    Color = Accumulation/(Threshold*Threshold)/Resolution/Resolution/100;
}

Here's the material:

[image: the material node setup]

Set the Script node to the script and click the refresh button to ensure it's compiled. Note that for OSL you'll need a version of Blender compiled with OSL support, and Open Shading Language must be enabled in the Render properties. I've driven the input coordinates of the Combine XYZ nodes from the locations of empties within the scene so they can be manipulated easily. Note also that FileName must be the absolute path to your source image, unless prefixed with '//' to indicate a path relative to your saved .blend (it cannot be a packed or 'internal' image).

Here's my source image:

[image: Blender logo]

And here's the result:

[image: rendered result]

Blend file attached

EDIT: Here's another rendered result showing the effect of reducing the Threshold for a sharper image. Here I decreased the Threshold to 0.001 and increased the Resolution to 200. I also added a Subsurf modifier to the reflector mesh (set to a factor of 3) and reduced the size of the ripples on the reflector so that the distortion is not so pronounced.

[image: higher-quality rendered result]

This is now sharp enough to make out the text at the bottom of the image (although it's obviously distorted by the reflection and mirrored left to right). It obviously took much longer to render. Quality could be further improved by increasing the number of render samples and/or decreasing the Threshold parameter further (at the cost of longer render times).

  • Great answer. What's the reason behind the random offset? – batFINGER Commented Sep 28, 2017 at 11:49
  • Thanks. The random offset is to ensure a spread of points over the reflector - otherwise it would just hit the same fixed grid points each time. Commented Sep 28, 2017 at 12:09
