Context
I am working on a project to implement and user-test a World-in-Miniature (WIM) interface for VR. A WIM is essentially a miniature replica of the scene the user is currently in.
Features
- the user can control the WIM's position, orientation, and scale, allowing them to dynamically view the scene from multiple viewpoints.
- the user can choose from different methods to remove geometry from the scene, in particular occluders such as the walls of a building floor, in order to view the objects inside a room.
- the user can interact with certain objects in the scene simply by interacting with their proxy objects in the WIM replica.
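For the occluder-removal feature, one very simple method is a horizontal cutaway plane: anything in the replica whose geometry starts above the plane (ceiling, upper wall segments) is hidden. This is only a sketch of the idea; the `(name, min_y, max_y)` object representation is my own illustration, not an engine API:

```python
def visible_after_cutaway(objects, cut_height):
    """Return the names of replica objects that stay visible under a
    horizontal cutaway at height cut_height: an object is hidden only if
    its entire vertical extent lies above the plane.
    objects: list of (name, min_y, max_y) vertical bounds in scene units."""
    return [name for name, min_y, max_y in objects if min_y < cut_height]

# Hypothetical room: the cutaway at 2.5 m hides the ceiling but keeps the rest.
room = [("floor", 0.0, 0.1), ("table", 0.1, 0.9), ("ceiling", 2.8, 3.0)]
```

A real implementation would usually clip geometry in a shader rather than hide whole objects, but the selection logic is the same.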
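For the proxy-interaction feature, a common simplification is to model the WIM as a uniformly scaled and translated copy of the world; mapping a grab on a proxy back onto the real object is then just the inverse transform. The function names and tuple representation below are illustrative, not from any particular engine:

```python
def world_to_wim(p, scale, offset):
    """Map a world-space point onto the miniature replica.
    scale is a single uniform factor; offset places the WIM in front of
    the user. (In an engine this is typically done by parenting the
    replica under one scaled transform.)"""
    return tuple(scale * c + o for c, o in zip(p, offset))

def wim_to_world(p, scale, offset):
    """Inverse mapping: a point touched on a proxy object in the WIM
    corresponds to this point on the real object in the full-scale scene."""
    return tuple((c - o) / scale for c, o in zip(p, offset))
```

Rotating the WIM adds a rotation term to both mappings, but the round-trip structure stays the same.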
Question
What I am interested in is a way to take a scene of relatively complex geometry and automatically render a miniature replica of it, for example inside a small box in front of the user (i.e. somewhat like a 3D map). It has to happen in real time, at run-time, because the user's viewpoint onto the WIM changes during interaction (e.g. when they rotate the WIM, or scale the currently rendered selection up or down).
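For what it's worth, the placement math itself is cheap: a uniform scale that fits the scene's axis-aligned bounds into a target box anchored in front of the user's head, recomputed each frame as the head moves. A minimal sketch of that computation, where all names, the box size, and the 0.5 m anchoring distance are my assumptions:

```python
def fit_wim(scene_min, scene_max, head_pos, head_forward,
            box_size=0.4, distance=0.5):
    """Compute a uniform scale and a translation that fit the scene's
    axis-aligned bounding box into a cube of edge box_size, held `distance`
    metres along the user's gaze direction (head_forward assumed unit
    length). Re-running this per frame keeps the WIM anchored to the user."""
    # The largest scene extent decides the uniform scale factor.
    extents = [mx - mn for mn, mx in zip(scene_min, scene_max)]
    scale = box_size / max(extents)
    # Centre of the target box, in front of the head.
    box_center = tuple(h + distance * f for h, f in zip(head_pos, head_forward))
    # Translation that moves the scaled scene centre onto the box centre.
    scene_center = [(mn + mx) / 2 for mn, mx in zip(scene_min, scene_max)]
    offset = tuple(b - scale * c for b, c in zip(box_center, scene_center))
    return scale, offset
```

The open question for me is less this transform and more how to actually render the replica efficiently (duplicated geometry under a scaled parent vs. rendering the scene a second time to a texture), which is what I'd like input on.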
Examples
- Original Paper: https://www.youtube.com/watch?v=Ytc3ix-He4E
- An example I found running on an Oculus: https://www.youtube.com/watch?v=zOTx_CWsR7g&t=105s