In rasterization, at least in the context of game development, it is common to have many instances of the same 3D object in a scene (think of many identical rocks at different sizes/positions/rotations) and to render them by putting the 3D object in GPU memory only once and just updating the model matrix for each draw. This comes relatively easily in rasterization, since rendering is done "object by object", and it is far more efficient than uploading several copies of the same mesh.
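
In OpenGL terms, the per-object loop might look something like this (names like meshVAO, modelLoc, indexCount, and instanceModels are placeholders for state set up elsewhere, not any specific engine's API):

glBindVertexArray(meshVAO);                    // geometry uploaded to the GPU once
for (const float* model : instanceModels) {    // one 4x4 model matrix per instance
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, model);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
}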

I was wondering: is that possible/usually done in ray tracing? I was looking at the code from Ray Tracing in One Weekend, and the author allocates a new sphere every time he adds one to the scene:

for (int a = -11; a < 11; a++) {
    for (int b = -11; b < 11; b++) {
        auto choose_mat = random_double();
        point3 center(a + 0.9*random_double(), 0.2, b + 0.9*random_double());

        if ((center - point3(4, 0.2, 0)).length() > 0.9) {
            shared_ptr<material> sphere_material;

            if (choose_mat < 0.8) {
                // diffuse
                auto albedo = color::random() * color::random();
                sphere_material = make_shared<lambertian>(albedo);
                world.add(make_shared<sphere>(center, 0.2, sphere_material));
            } else if (choose_mat < 0.95) {
                // metal
                auto albedo = color::random(0.5, 1);
                auto fuzz = random_double(0, 0.5);
                sphere_material = make_shared<metal>(albedo, fuzz);
                world.add(make_shared<sphere>(center, 0.2, sphere_material));
            } else {
                // glass
                sphere_material = make_shared<dielectric>(1.5);
                world.add(make_shared<sphere>(center, 0.2, sphere_material));
            }
        }
    }
}

I wonder: is this done just for simplicity (after all, it is a beginner's ray-tracing course), or is it not usual practice to reuse geometry in such renderers (possibly because the algorithms do not make it possible, as they evaluate the whole scene at once)?

  • You just transform the rays with the inverse of the object transform to make instances work. The Ray Tracing in One Weekend book is targeted at beginners, which is why it is not in there; almost every other ray-tracing book has it. – lightxbulb, Dec 6, 2021 at 11:22
  • So what would the geometric operation be, exactly? $T^{-1}r$? (Where $r$ is the ray and $T$ the transform matrix.) – Ilya, Dec 6, 2021 at 14:40
  • Let the ray's origin be $\vec{o}$ and the direction $\vec{d}$, and assume $T$ is a $4\times 4$ transformation matrix. Extend $\vec{o}$ and $\vec{d}$ to homogeneous coordinates, respectively as a point and a direction: $(\vec{o}, 1)$ and $(\vec{d}, 0)$. Now transform: $(\vec{o}', 1) = T^{-1}(\vec{o}, 1)$ and $(\vec{d}', 0) = T^{-1}(\vec{d}, 0)$. You can overload the multiplication operator in C++ for arguments (matrix, ray) to perform the above operation (a code sketch of this follows the comments). You can store the inverse matrix with the object so you don't have to recompute it for every ray. Note that your ray direction may become non-unit length. – lightxbulb, Dec 6, 2021 at 16:47
  • Is it good practice to re-normalize the ray afterwards, or would I lose information? – Ilya, Dec 6, 2021 at 20:14
  • It's fine to renormalize. Depending on implementation details, e.g. keeping track of some $t$ value in the coordinate system of the ray, some changes may be in order; but that is implementation-specific. – lightxbulb, Dec 6, 2021 at 20:15

1 Answer


Example code tends to be bad code from a performance (or even software engineering) standpoint because... it's an example. Its primary purpose is to make it clear how to do a thing, not to do it in an efficient or well-structured way.

Ray tracing can reference "meshes" or any object definition just fine. You have the data for some mesh or whatever with its own model space, and you have zero or more instances of those objects in the world. Those instances each come with per-instance data, like their transformation into world space, any bounding volumes used for quick ray intersection tests, and attachments to whatever your spatial subdivision scheme is to make ray tracing fast.
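
As a purely illustrative layout, reusing the vec3 and mat4 types from the sketch under the comments (none of these names come from the answer or any particular renderer):

struct Mesh {
    // Shared vertex/index data; stored once no matter how many
    // instances reference it.
};

struct Aabb { vec3 min, max; };  // world-space bounds for quick rejection tests

struct Instance {
    const Mesh* mesh;    // shared geometry, never duplicated
    mat4 model;          // model space -> world space
    mat4 inv_model;      // world space -> model space, precomputed once
    Aabb world_bounds;   // what the BVH / spatial subdivision stores
};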

In ray tracing, when testing ray intersection with an object in the world, you often transform the ray from world-space into the model space of that object. You then do intersection tests in the space of the mesh. That way, you're not transforming thousands of vertices or whatever for each ray intersection test. Once you get the intersection point+normal, you can transform those back into world-space to do world-space computations.
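
Continuing the same sketch, a per-instance intersection routine might look like this, where intersect_mesh stands in for whatever mesh-space test a renderer actually provides:

struct hit_record {
    vec3 point;   // intersection point
    vec3 normal;  // surface normal
    double t;     // ray parameter at the hit
};

// Hypothetical mesh-space intersection test (definition elsewhere).
bool intersect_mesh(const Mesh& mesh, const ray& r, hit_record& rec);

// Multiply by the transpose of T. Normals transform by the inverse
// transpose of the model matrix, i.e. the transpose of inv_model.
vec3 transform_transpose(const mat4& T, const vec3& n) {
    return { T.m[0][0]*n.x + T.m[0][1]*n.y + T.m[0][2]*n.z,
             T.m[1][0]*n.x + T.m[1][1]*n.y + T.m[1][2]*n.z,
             T.m[2][0]*n.x + T.m[2][1]*n.y + T.m[2][2]*n.z };
}

bool intersect_instance(const Instance& inst, const ray& world_ray,
                        hit_record& rec) {
    // One cheap ray transform instead of transforming every vertex.
    ray model_ray = to_model_space(inst.inv_model, world_ray);
    if (!intersect_mesh(*inst.mesh, model_ray, rec))
        return false;
    // Bring the results back to world space. Note rec.t is measured along
    // model_ray (see the renormalization caveat in the comments).
    rec.point  = transform(inst.model, rec.point, 1.0);
    rec.normal = normalize(transform_transpose(inst.inv_model, rec.normal));
    return true;
}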

This also allows you to have a single "mesh" for basic objects like spheres and cubes. Scales in the model transform can be used to stretch them, giving you a sphere or cube of any size, an ellipsoid from the one sphere, or any rectangular parallelepiped from the one cube. Or really, under an arbitrary linear transform, any parallelepiped.
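
For instance, a unit sphere $x^2 + y^2 + z^2 = 1$ under the scale $S = \operatorname{diag}(a, b, c)$ becomes the ellipsoid

$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 + \left(\frac{z}{c}\right)^2 = 1,$$

and intersecting a world-space ray $\vec{o} + t\vec{d}$ with that ellipsoid reduces to intersecting $S^{-1}\vec{o} + t\,S^{-1}\vec{d}$ with the unit sphere, which is exactly the inverse-transform trick from the comments above.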
