
Context

I am working on a project to implement and user-test a World-in-Miniature (WIM) interface for VR. A WIM is essentially a replica of the scene the user is currently in, but in miniature.

Features

  • the user has control over the WIM's position, orientation, and scale, allowing them to dynamically view the scene from multiple viewpoints.
  • the user can choose from different methods of removing geometry from the scene, in particular occluders such as walls or building floors, in order to view the objects in a room.
  • the user can interact with certain objects in the scene simply by interacting with the proxy objects in the WIM replica.

Question

What I am interested in is a way to take a scene of relatively complex geometry and automatically render a miniature replica of it, for example inside a small rectangle in front of the user (i.e. sort of like a 3D map). It would have to happen in real time, at run-time, since the user's viewpoint of the WIM changes during interaction (e.g. when they rotate it, or scale the currently rendered selection up or down).


1 Answer


The first thing that jumped into my head when I saw this was Multiview. You are probably already using it to render the scene if you are rendering to VR anyway (if not, you should be).

For those that don't know about Multiview: it is a way to broadcast the same draw call to multiple layers of a layered framebuffer image, so the scene is only submitted once. It lets you use a mask to select which layers to broadcast to, so it is easy to turn broadcasting on and off (making it quick and easy to turn the WIM on and off). A built-in shader variable (gl_ViewID_OVR in OpenGL, gl_ViewIndex in Vulkan) identifies the current view; usually it is used to pick the right view/projection matrix, but the shader code can get as sophisticated as needed. Its only downside is that it requires relatively modern hardware.
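For concreteness, here is a minimal sketch of what enabling this looks like in Vulkan (VK_KHR_multiview, core since Vulkan 1.1). Attachment and subpass setup is omitted, and the four-view layout (main eyes in views 0/1, WIM eyes in views 2/3) is just my assumed convention:

```cpp
#include <vulkan/vulkan.h>

// Broadcast each draw in the subpass to four views (array layers):
// bit i of the view mask enables view i.
const uint32_t viewMask        = 0b1111; // main L/R + WIM L/R
const uint32_t correlationMask = 0b1111; // hint that the views are spatially related

VkRenderPassMultiviewCreateInfo multiviewInfo{};
multiviewInfo.sType                = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO;
multiviewInfo.subpassCount         = 1;
multiviewInfo.pViewMasks           = &viewMask;
multiviewInfo.correlationMaskCount = 1;
multiviewInfo.pCorrelationMasks    = &correlationMask;

VkRenderPassCreateInfo renderPassInfo{};
renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
renderPassInfo.pNext = &multiviewInfo; // chain the multiview info into the render pass
// ... attachments, subpasses, and vkCreateRenderPass as usual ...
```

Using a view mask of 0b0011 instead (i.e. a render pass without the WIM bits set) is what makes toggling the WIM cheap.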

Using multiview, the CPU would set up four view matrices: two for the VR left and right eyes, and two for the WIM left and right eyes. The WIM view matrices can then include a constant scale factor so the miniature is sized to fit, and the camera can be positioned as needed.
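A sketch of that matrix setup using GLM; headPose, ipd, wimPivot, wimPlacement, and wimScale are hypothetical inputs (a VR runtime such as OpenXR would normally hand you the eye poses directly):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Views 0/1: the normal stereo pair. Views 2/3: the same pair, but the world
// is first shrunk about a pivot and re-placed in front of the user, so the
// usual per-view matrix lookup in the shader renders the miniature "for free".
struct Views { glm::mat4 view[4]; };

Views buildViews(const glm::mat4& headPose, float ipd,
                 const glm::vec3& wimPivot,      // scene point the WIM is centered on
                 const glm::vec3& wimPlacement,  // where the miniature floats, world space
                 float wimScale)                 // e.g. 0.01f for a 1:100 miniature
{
    const glm::mat4 invHead = glm::inverse(headPose);
    const glm::mat4 offsetL = glm::translate(glm::mat4(1.0f), glm::vec3(+ipd * 0.5f, 0.0f, 0.0f));
    const glm::mat4 offsetR = glm::translate(glm::mat4(1.0f), glm::vec3(-ipd * 0.5f, 0.0f, 0.0f));

    // World -> miniature world: recenter on the pivot, scale down,
    // then place the result in front of the user.
    const glm::mat4 wimModel =
        glm::translate(glm::mat4(1.0f), wimPlacement) *
        glm::scale(glm::mat4(1.0f), glm::vec3(wimScale)) *
        glm::translate(glm::mat4(1.0f), -wimPivot);

    Views out;
    out.view[0] = offsetL * invHead;      // main left eye
    out.view[1] = offsetR * invHead;      // main right eye
    out.view[2] = out.view[0] * wimModel; // WIM left eye
    out.view[3] = out.view[1] * wimModel; // WIM right eye
    return out;
}
```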

To improve performance you will probably want to break the rendering into multiple passes, where one pass renders everything that is common to both scenes and a second pass renders the more expensive effects for the main scene only. Again, as needed.
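With multiview that split is just a matter of per-pass view masks; a hypothetical example:

```cpp
// Pass A: geometry common to both scenes, broadcast to all four views.
const uint32_t commonPassViewMask  = 0b1111; // main L/R + WIM L/R
// Pass B: expensive effects (shadows, post-processing, ...) for the main eyes only.
const uint32_t effectsPassViewMask = 0b0011; // main L/R
```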

The last step would be merging the two scenes, probably just by blitting the WIM over the main scene, using a unique WIM background color as a transparency key.
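A conceptual CPU-side sketch of that color-key merge (in a real renderer this would be a fullscreen-quad fragment shader or a blit; the magenta key color is an arbitrary assumption):

```cpp
#include <cstdint>
#include <vector>

constexpr uint32_t kWimClearColor = 0xFFFF00FFu; // reserved RGBA background key (assumed)

// Copy WIM pixels over one eye's main-scene image wherever they are not
// the reserved background color. Both buffers are same-sized RGBA8.
void compositeWim(std::vector<uint32_t>& mainEye,
                  const std::vector<uint32_t>& wimEye)
{
    for (std::size_t i = 0; i < mainEye.size(); ++i)
        if (wimEye[i] != kWimClearColor) // keyed-out background keeps the main scene
            mainEye[i] = wimEye[i];
}
```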

I won't go into the details of implementing multiview, but googling multiview for the API you are using should turn up some good tutorials.

Scene interaction is simplified, since the CPU only ever works with a single scene; the only tricky part is figuring out whether the user is interacting with the WIM or the main scene. This would affect how the ray is cast into the scene (if you use ray casting for selecting objects).
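One way to handle that, sketched with GLM and assuming the same wimModel world-to-miniature transform used above: if the controller ray hits the WIM's bounds, pull the ray back into full-scale scene space and intersect the one shared scene as usual (all names here are illustrative):

```cpp
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, dir; };

// Map a world-space ray that hit the miniature back into full-scale scene
// space by running it through the inverse of the WIM transform.
Ray wimRayToScene(const Ray& r, const glm::mat4& wimModel)
{
    const glm::mat4 inv = glm::inverse(wimModel);
    Ray out;
    out.origin = glm::vec3(inv * glm::vec4(r.origin, 1.0f));              // point: w = 1
    out.dir    = glm::normalize(glm::vec3(inv * glm::vec4(r.dir, 0.0f))); // direction: w = 0
    return out;
}
```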

I just noticed that multiview is an OpenGL/Vulkan-ism; I'm not sure what the DirectX equivalent is.
